Feb  2 11:46:00 np0005605476 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb  2 11:46:00 np0005605476 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb  2 11:46:00 np0005605476 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 11:46:00 np0005605476 kernel: BIOS-provided physical RAM map:
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  2 11:46:00 np0005605476 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb  2 11:46:00 np0005605476 kernel: NX (Execute Disable) protection: active
Feb  2 11:46:00 np0005605476 kernel: APIC: Static calls initialized
Feb  2 11:46:00 np0005605476 kernel: SMBIOS 2.8 present.
Feb  2 11:46:00 np0005605476 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb  2 11:46:00 np0005605476 kernel: Hypervisor detected: KVM
Feb  2 11:46:00 np0005605476 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  2 11:46:00 np0005605476 kernel: kvm-clock: using sched offset of 4451482790 cycles
Feb  2 11:46:00 np0005605476 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  2 11:46:00 np0005605476 kernel: tsc: Detected 2800.000 MHz processor
Feb  2 11:46:00 np0005605476 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb  2 11:46:00 np0005605476 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb  2 11:46:00 np0005605476 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  2 11:46:00 np0005605476 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb  2 11:46:00 np0005605476 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb  2 11:46:00 np0005605476 kernel: Using GB pages for direct mapping
Feb  2 11:46:00 np0005605476 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb  2 11:46:00 np0005605476 kernel: ACPI: Early table checksum verification disabled
Feb  2 11:46:00 np0005605476 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb  2 11:46:00 np0005605476 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 11:46:00 np0005605476 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 11:46:00 np0005605476 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 11:46:00 np0005605476 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb  2 11:46:00 np0005605476 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 11:46:00 np0005605476 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 11:46:00 np0005605476 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb  2 11:46:00 np0005605476 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb  2 11:46:00 np0005605476 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb  2 11:46:00 np0005605476 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb  2 11:46:00 np0005605476 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb  2 11:46:00 np0005605476 kernel: No NUMA configuration found
Feb  2 11:46:00 np0005605476 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb  2 11:46:00 np0005605476 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Feb  2 11:46:00 np0005605476 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb  2 11:46:00 np0005605476 kernel: Zone ranges:
Feb  2 11:46:00 np0005605476 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  2 11:46:00 np0005605476 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb  2 11:46:00 np0005605476 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 11:46:00 np0005605476 kernel:  Device   empty
Feb  2 11:46:00 np0005605476 kernel: Movable zone start for each node
Feb  2 11:46:00 np0005605476 kernel: Early memory node ranges
Feb  2 11:46:00 np0005605476 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  2 11:46:00 np0005605476 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb  2 11:46:00 np0005605476 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 11:46:00 np0005605476 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb  2 11:46:00 np0005605476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  2 11:46:00 np0005605476 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  2 11:46:00 np0005605476 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb  2 11:46:00 np0005605476 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  2 11:46:00 np0005605476 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  2 11:46:00 np0005605476 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  2 11:46:00 np0005605476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  2 11:46:00 np0005605476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  2 11:46:00 np0005605476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  2 11:46:00 np0005605476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  2 11:46:00 np0005605476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  2 11:46:00 np0005605476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  2 11:46:00 np0005605476 kernel: TSC deadline timer available
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Max. logical packages:   8
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Max. logical dies:       8
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Max. dies per package:   1
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Max. threads per core:   1
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Num. cores per package:     1
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Num. threads per package:   1
Feb  2 11:46:00 np0005605476 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb  2 11:46:00 np0005605476 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb  2 11:46:00 np0005605476 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb  2 11:46:00 np0005605476 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb  2 11:46:00 np0005605476 kernel: Booting paravirtualized kernel on KVM
Feb  2 11:46:00 np0005605476 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  2 11:46:00 np0005605476 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb  2 11:46:00 np0005605476 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb  2 11:46:00 np0005605476 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  2 11:46:00 np0005605476 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 11:46:00 np0005605476 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb  2 11:46:00 np0005605476 kernel: random: crng init done
Feb  2 11:46:00 np0005605476 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: Fallback order for Node 0: 0 
Feb  2 11:46:00 np0005605476 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb  2 11:46:00 np0005605476 kernel: Policy zone: Normal
Feb  2 11:46:00 np0005605476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  2 11:46:00 np0005605476 kernel: software IO TLB: area num 8.
Feb  2 11:46:00 np0005605476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb  2 11:46:00 np0005605476 kernel: ftrace: allocating 49438 entries in 194 pages
Feb  2 11:46:00 np0005605476 kernel: ftrace: allocated 194 pages with 3 groups
Feb  2 11:46:00 np0005605476 kernel: Dynamic Preempt: voluntary
Feb  2 11:46:00 np0005605476 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb  2 11:46:00 np0005605476 kernel: rcu: 	RCU event tracing is enabled.
Feb  2 11:46:00 np0005605476 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb  2 11:46:00 np0005605476 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb  2 11:46:00 np0005605476 kernel: 	Rude variant of Tasks RCU enabled.
Feb  2 11:46:00 np0005605476 kernel: 	Tracing variant of Tasks RCU enabled.
Feb  2 11:46:00 np0005605476 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  2 11:46:00 np0005605476 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb  2 11:46:00 np0005605476 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 11:46:00 np0005605476 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 11:46:00 np0005605476 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 11:46:00 np0005605476 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb  2 11:46:00 np0005605476 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb  2 11:46:00 np0005605476 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb  2 11:46:00 np0005605476 kernel: Console: colour VGA+ 80x25
Feb  2 11:46:00 np0005605476 kernel: printk: console [ttyS0] enabled
Feb  2 11:46:00 np0005605476 kernel: ACPI: Core revision 20230331
Feb  2 11:46:00 np0005605476 kernel: APIC: Switch to symmetric I/O mode setup
Feb  2 11:46:00 np0005605476 kernel: x2apic enabled
Feb  2 11:46:00 np0005605476 kernel: APIC: Switched APIC routing to: physical x2apic
Feb  2 11:46:00 np0005605476 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  2 11:46:00 np0005605476 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb  2 11:46:00 np0005605476 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  2 11:46:00 np0005605476 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  2 11:46:00 np0005605476 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  2 11:46:00 np0005605476 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb  2 11:46:00 np0005605476 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb  2 11:46:00 np0005605476 kernel: Spectre V2 : Mitigation: Retpolines
Feb  2 11:46:00 np0005605476 kernel: RETBleed: Mitigation: untrained return thunk
Feb  2 11:46:00 np0005605476 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb  2 11:46:00 np0005605476 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  2 11:46:00 np0005605476 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb  2 11:46:00 np0005605476 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  2 11:46:00 np0005605476 kernel: active return thunk: retbleed_return_thunk
Feb  2 11:46:00 np0005605476 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  2 11:46:00 np0005605476 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  2 11:46:00 np0005605476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  2 11:46:00 np0005605476 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  2 11:46:00 np0005605476 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  2 11:46:00 np0005605476 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb  2 11:46:00 np0005605476 kernel: Freeing SMP alternatives memory: 40K
Feb  2 11:46:00 np0005605476 kernel: pid_max: default: 32768 minimum: 301
Feb  2 11:46:00 np0005605476 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb  2 11:46:00 np0005605476 kernel: landlock: Up and running.
Feb  2 11:46:00 np0005605476 kernel: Yama: becoming mindful.
Feb  2 11:46:00 np0005605476 kernel: SELinux:  Initializing.
Feb  2 11:46:00 np0005605476 kernel: LSM support for eBPF active
Feb  2 11:46:00 np0005605476 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  2 11:46:00 np0005605476 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  2 11:46:00 np0005605476 kernel: ... version:                0
Feb  2 11:46:00 np0005605476 kernel: ... bit width:              48
Feb  2 11:46:00 np0005605476 kernel: ... generic registers:      6
Feb  2 11:46:00 np0005605476 kernel: ... value mask:             0000ffffffffffff
Feb  2 11:46:00 np0005605476 kernel: ... max period:             00007fffffffffff
Feb  2 11:46:00 np0005605476 kernel: ... fixed-purpose events:   0
Feb  2 11:46:00 np0005605476 kernel: ... event mask:             000000000000003f
Feb  2 11:46:00 np0005605476 kernel: signal: max sigframe size: 1776
Feb  2 11:46:00 np0005605476 kernel: rcu: Hierarchical SRCU implementation.
Feb  2 11:46:00 np0005605476 kernel: rcu: 	Max phase no-delay instances is 400.
Feb  2 11:46:00 np0005605476 kernel: smp: Bringing up secondary CPUs ...
Feb  2 11:46:00 np0005605476 kernel: smpboot: x86: Booting SMP configuration:
Feb  2 11:46:00 np0005605476 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb  2 11:46:00 np0005605476 kernel: smp: Brought up 1 node, 8 CPUs
Feb  2 11:46:00 np0005605476 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb  2 11:46:00 np0005605476 kernel: node 0 deferred pages initialised in 8ms
Feb  2 11:46:00 np0005605476 kernel: Memory: 7763724K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618412K reserved, 0K cma-reserved)
Feb  2 11:46:00 np0005605476 kernel: devtmpfs: initialized
Feb  2 11:46:00 np0005605476 kernel: x86/mm: Memory block size: 128MB
Feb  2 11:46:00 np0005605476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  2 11:46:00 np0005605476 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb  2 11:46:00 np0005605476 kernel: pinctrl core: initialized pinctrl subsystem
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  2 11:46:00 np0005605476 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb  2 11:46:00 np0005605476 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb  2 11:46:00 np0005605476 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb  2 11:46:00 np0005605476 kernel: audit: initializing netlink subsys (disabled)
Feb  2 11:46:00 np0005605476 kernel: audit: type=2000 audit(1770050759.174:1): state=initialized audit_enabled=0 res=1
Feb  2 11:46:00 np0005605476 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb  2 11:46:00 np0005605476 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  2 11:46:00 np0005605476 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  2 11:46:00 np0005605476 kernel: cpuidle: using governor menu
Feb  2 11:46:00 np0005605476 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  2 11:46:00 np0005605476 kernel: PCI: Using configuration type 1 for base access
Feb  2 11:46:00 np0005605476 kernel: PCI: Using configuration type 1 for extended access
Feb  2 11:46:00 np0005605476 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  2 11:46:00 np0005605476 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb  2 11:46:00 np0005605476 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb  2 11:46:00 np0005605476 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb  2 11:46:00 np0005605476 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb  2 11:46:00 np0005605476 kernel: Demotion targets for Node 0: null
Feb  2 11:46:00 np0005605476 kernel: cryptd: max_cpu_qlen set to 1000
Feb  2 11:46:00 np0005605476 kernel: ACPI: Added _OSI(Module Device)
Feb  2 11:46:00 np0005605476 kernel: ACPI: Added _OSI(Processor Device)
Feb  2 11:46:00 np0005605476 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  2 11:46:00 np0005605476 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  2 11:46:00 np0005605476 kernel: ACPI: Interpreter enabled
Feb  2 11:46:00 np0005605476 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb  2 11:46:00 np0005605476 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  2 11:46:00 np0005605476 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  2 11:46:00 np0005605476 kernel: PCI: Using E820 reservations for host bridge windows
Feb  2 11:46:00 np0005605476 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  2 11:46:00 np0005605476 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [3] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [4] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [5] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [6] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [7] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [8] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [9] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [10] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [11] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [12] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [13] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [14] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [15] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [16] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [17] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [18] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [19] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [20] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [21] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [22] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [23] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [24] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [25] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [26] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [27] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [28] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [29] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [30] registered
Feb  2 11:46:00 np0005605476 kernel: acpiphp: Slot [31] registered
Feb  2 11:46:00 np0005605476 kernel: PCI host bridge to bus 0000:00
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  2 11:46:00 np0005605476 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  2 11:46:00 np0005605476 kernel: iommu: Default domain type: Translated
Feb  2 11:46:00 np0005605476 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb  2 11:46:00 np0005605476 kernel: SCSI subsystem initialized
Feb  2 11:46:00 np0005605476 kernel: ACPI: bus type USB registered
Feb  2 11:46:00 np0005605476 kernel: usbcore: registered new interface driver usbfs
Feb  2 11:46:00 np0005605476 kernel: usbcore: registered new interface driver hub
Feb  2 11:46:00 np0005605476 kernel: usbcore: registered new device driver usb
Feb  2 11:46:00 np0005605476 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  2 11:46:00 np0005605476 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  2 11:46:00 np0005605476 kernel: PTP clock support registered
Feb  2 11:46:00 np0005605476 kernel: EDAC MC: Ver: 3.0.0
Feb  2 11:46:00 np0005605476 kernel: NetLabel: Initializing
Feb  2 11:46:00 np0005605476 kernel: NetLabel:  domain hash size = 128
Feb  2 11:46:00 np0005605476 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb  2 11:46:00 np0005605476 kernel: NetLabel:  unlabeled traffic allowed by default
Feb  2 11:46:00 np0005605476 kernel: PCI: Using ACPI for IRQ routing
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  2 11:46:00 np0005605476 kernel: vgaarb: loaded
Feb  2 11:46:00 np0005605476 kernel: clocksource: Switched to clocksource kvm-clock
Feb  2 11:46:00 np0005605476 kernel: VFS: Disk quotas dquot_6.6.0
Feb  2 11:46:00 np0005605476 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  2 11:46:00 np0005605476 kernel: pnp: PnP ACPI init
Feb  2 11:46:00 np0005605476 kernel: pnp: PnP ACPI: found 5 devices
Feb  2 11:46:00 np0005605476 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_INET protocol family
Feb  2 11:46:00 np0005605476 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb  2 11:46:00 np0005605476 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_XDP protocol family
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb  2 11:46:00 np0005605476 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  2 11:46:00 np0005605476 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  2 11:46:00 np0005605476 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 42055 usecs
Feb  2 11:46:00 np0005605476 kernel: PCI: CLS 0 bytes, default 64
Feb  2 11:46:00 np0005605476 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb  2 11:46:00 np0005605476 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb  2 11:46:00 np0005605476 kernel: ACPI: bus type thunderbolt registered
Feb  2 11:46:00 np0005605476 kernel: Trying to unpack rootfs image as initramfs...
Feb  2 11:46:00 np0005605476 kernel: Initialise system trusted keyrings
Feb  2 11:46:00 np0005605476 kernel: Key type blacklist registered
Feb  2 11:46:00 np0005605476 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb  2 11:46:00 np0005605476 kernel: zbud: loaded
Feb  2 11:46:00 np0005605476 kernel: integrity: Platform Keyring initialized
Feb  2 11:46:00 np0005605476 kernel: integrity: Machine keyring initialized
Feb  2 11:46:00 np0005605476 kernel: Freeing initrd memory: 88000K
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_ALG protocol family
Feb  2 11:46:00 np0005605476 kernel: xor: automatically using best checksumming function   avx       
Feb  2 11:46:00 np0005605476 kernel: Key type asymmetric registered
Feb  2 11:46:00 np0005605476 kernel: Asymmetric key parser 'x509' registered
Feb  2 11:46:00 np0005605476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb  2 11:46:00 np0005605476 kernel: io scheduler mq-deadline registered
Feb  2 11:46:00 np0005605476 kernel: io scheduler kyber registered
Feb  2 11:46:00 np0005605476 kernel: io scheduler bfq registered
Feb  2 11:46:00 np0005605476 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb  2 11:46:00 np0005605476 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb  2 11:46:00 np0005605476 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb  2 11:46:00 np0005605476 kernel: ACPI: button: Power Button [PWRF]
Feb  2 11:46:00 np0005605476 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  2 11:46:00 np0005605476 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  2 11:46:00 np0005605476 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  2 11:46:00 np0005605476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  2 11:46:00 np0005605476 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  2 11:46:00 np0005605476 kernel: Non-volatile memory driver v1.3
Feb  2 11:46:00 np0005605476 kernel: rdac: device handler registered
Feb  2 11:46:00 np0005605476 kernel: hp_sw: device handler registered
Feb  2 11:46:00 np0005605476 kernel: emc: device handler registered
Feb  2 11:46:00 np0005605476 kernel: alua: device handler registered
Feb  2 11:46:00 np0005605476 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb  2 11:46:00 np0005605476 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb  2 11:46:00 np0005605476 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb  2 11:46:00 np0005605476 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb  2 11:46:00 np0005605476 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb  2 11:46:00 np0005605476 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb  2 11:46:00 np0005605476 kernel: usb usb1: Product: UHCI Host Controller
Feb  2 11:46:00 np0005605476 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb  2 11:46:00 np0005605476 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb  2 11:46:00 np0005605476 kernel: hub 1-0:1.0: USB hub found
Feb  2 11:46:00 np0005605476 kernel: hub 1-0:1.0: 2 ports detected
Feb  2 11:46:00 np0005605476 kernel: usbcore: registered new interface driver usbserial_generic
Feb  2 11:46:00 np0005605476 kernel: usbserial: USB Serial support registered for generic
Feb  2 11:46:00 np0005605476 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  2 11:46:00 np0005605476 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  2 11:46:00 np0005605476 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  2 11:46:00 np0005605476 kernel: mousedev: PS/2 mouse device common for all mice
Feb  2 11:46:00 np0005605476 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb  2 11:46:00 np0005605476 kernel: rtc_cmos 00:04: registered as rtc0
Feb  2 11:46:00 np0005605476 kernel: rtc_cmos 00:04: setting system clock to 2026-02-02T16:45:59 UTC (1770050759)
Feb  2 11:46:00 np0005605476 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb  2 11:46:00 np0005605476 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb  2 11:46:00 np0005605476 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb  2 11:46:00 np0005605476 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  2 11:46:00 np0005605476 kernel: usbcore: registered new interface driver usbhid
Feb  2 11:46:00 np0005605476 kernel: usbhid: USB HID core driver
Feb  2 11:46:00 np0005605476 kernel: drop_monitor: Initializing network drop monitor service
Feb  2 11:46:00 np0005605476 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb  2 11:46:00 np0005605476 kernel: Initializing XFRM netlink socket
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_INET6 protocol family
Feb  2 11:46:00 np0005605476 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb  2 11:46:00 np0005605476 kernel: Segment Routing with IPv6
Feb  2 11:46:00 np0005605476 kernel: NET: Registered PF_PACKET protocol family
Feb  2 11:46:00 np0005605476 kernel: mpls_gso: MPLS GSO support
Feb  2 11:46:00 np0005605476 kernel: IPI shorthand broadcast: enabled
Feb  2 11:46:00 np0005605476 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  2 11:46:00 np0005605476 kernel: AES CTR mode by8 optimization enabled
Feb  2 11:46:00 np0005605476 kernel: sched_clock: Marking stable (926003630, 139067100)->(1132460730, -67390000)
Feb  2 11:46:00 np0005605476 kernel: registered taskstats version 1
Feb  2 11:46:00 np0005605476 kernel: Loading compiled-in X.509 certificates
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb  2 11:46:00 np0005605476 kernel: Demotion targets for Node 0: null
Feb  2 11:46:00 np0005605476 kernel: page_owner is disabled
Feb  2 11:46:00 np0005605476 kernel: Key type .fscrypt registered
Feb  2 11:46:00 np0005605476 kernel: Key type fscrypt-provisioning registered
Feb  2 11:46:00 np0005605476 kernel: Key type big_key registered
Feb  2 11:46:00 np0005605476 kernel: Key type encrypted registered
Feb  2 11:46:00 np0005605476 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  2 11:46:00 np0005605476 kernel: Loading compiled-in module X.509 certificates
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 11:46:00 np0005605476 kernel: ima: Allocated hash algorithm: sha256
Feb  2 11:46:00 np0005605476 kernel: ima: No architecture policies found
Feb  2 11:46:00 np0005605476 kernel: evm: Initialising EVM extended attributes:
Feb  2 11:46:00 np0005605476 kernel: evm: security.selinux
Feb  2 11:46:00 np0005605476 kernel: evm: security.SMACK64 (disabled)
Feb  2 11:46:00 np0005605476 kernel: evm: security.SMACK64EXEC (disabled)
Feb  2 11:46:00 np0005605476 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb  2 11:46:00 np0005605476 kernel: evm: security.SMACK64MMAP (disabled)
Feb  2 11:46:00 np0005605476 kernel: evm: security.apparmor (disabled)
Feb  2 11:46:00 np0005605476 kernel: evm: security.ima
Feb  2 11:46:00 np0005605476 kernel: evm: security.capability
Feb  2 11:46:00 np0005605476 kernel: evm: HMAC attrs: 0x1
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb  2 11:46:00 np0005605476 kernel: Running certificate verification RSA selftest
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb  2 11:46:00 np0005605476 kernel: Running certificate verification ECDSA selftest
Feb  2 11:46:00 np0005605476 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb  2 11:46:00 np0005605476 kernel: clk: Disabling unused clocks
Feb  2 11:46:00 np0005605476 kernel: Freeing unused decrypted memory: 2028K
Feb  2 11:46:00 np0005605476 kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb  2 11:46:00 np0005605476 kernel: Write protecting the kernel read-only data: 30720k
Feb  2 11:46:00 np0005605476 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb  2 11:46:00 np0005605476 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb  2 11:46:00 np0005605476 kernel: Run /init as init process
Feb  2 11:46:00 np0005605476 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 11:46:00 np0005605476 systemd: Detected virtualization kvm.
Feb  2 11:46:00 np0005605476 systemd: Detected architecture x86-64.
Feb  2 11:46:00 np0005605476 systemd: Running in initrd.
Feb  2 11:46:00 np0005605476 systemd: No hostname configured, using default hostname.
Feb  2 11:46:00 np0005605476 systemd: Hostname set to <localhost>.
Feb  2 11:46:00 np0005605476 systemd: Initializing machine ID from VM UUID.
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: Product: QEMU USB Tablet
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: Manufacturer: QEMU
Feb  2 11:46:00 np0005605476 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb  2 11:46:00 np0005605476 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb  2 11:46:00 np0005605476 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb  2 11:46:00 np0005605476 systemd: Queued start job for default target Initrd Default Target.
Feb  2 11:46:00 np0005605476 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 11:46:00 np0005605476 systemd: Reached target Local Encrypted Volumes.
Feb  2 11:46:00 np0005605476 systemd: Reached target Initrd /usr File System.
Feb  2 11:46:00 np0005605476 systemd: Reached target Local File Systems.
Feb  2 11:46:00 np0005605476 systemd: Reached target Path Units.
Feb  2 11:46:00 np0005605476 systemd: Reached target Slice Units.
Feb  2 11:46:00 np0005605476 systemd: Reached target Swaps.
Feb  2 11:46:00 np0005605476 systemd: Reached target Timer Units.
Feb  2 11:46:00 np0005605476 systemd: Listening on D-Bus System Message Bus Socket.
Feb  2 11:46:00 np0005605476 systemd: Listening on Journal Socket (/dev/log).
Feb  2 11:46:00 np0005605476 systemd: Listening on Journal Socket.
Feb  2 11:46:00 np0005605476 systemd: Listening on udev Control Socket.
Feb  2 11:46:00 np0005605476 systemd: Listening on udev Kernel Socket.
Feb  2 11:46:00 np0005605476 systemd: Reached target Socket Units.
Feb  2 11:46:00 np0005605476 systemd: Starting Create List of Static Device Nodes...
Feb  2 11:46:00 np0005605476 systemd: Starting Journal Service...
Feb  2 11:46:00 np0005605476 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 11:46:00 np0005605476 systemd: Starting Apply Kernel Variables...
Feb  2 11:46:00 np0005605476 systemd: Starting Create System Users...
Feb  2 11:46:00 np0005605476 systemd: Starting Setup Virtual Console...
Feb  2 11:46:00 np0005605476 systemd: Finished Create List of Static Device Nodes.
Feb  2 11:46:00 np0005605476 systemd: Finished Apply Kernel Variables.
Feb  2 11:46:00 np0005605476 systemd: Finished Create System Users.
Feb  2 11:46:00 np0005605476 systemd-journald[304]: Journal started
Feb  2 11:46:00 np0005605476 systemd-journald[304]: Runtime Journal (/run/log/journal/cb1779c6d1fa4b89a494cd579a1210f6) is 8.0M, max 153.6M, 145.6M free.
Feb  2 11:46:00 np0005605476 systemd-sysusers[309]: Creating group 'users' with GID 100.
Feb  2 11:46:00 np0005605476 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Feb  2 11:46:00 np0005605476 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb  2 11:46:00 np0005605476 systemd: Started Journal Service.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 11:46:00 np0005605476 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 11:46:00 np0005605476 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 11:46:00 np0005605476 systemd[1]: Finished Setup Virtual Console.
Feb  2 11:46:00 np0005605476 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting dracut cmdline hook...
Feb  2 11:46:00 np0005605476 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Feb  2 11:46:00 np0005605476 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 11:46:00 np0005605476 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 11:46:00 np0005605476 systemd[1]: Finished dracut cmdline hook.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting dracut pre-udev hook...
Feb  2 11:46:00 np0005605476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  2 11:46:00 np0005605476 kernel: device-mapper: uevent: version 1.0.3
Feb  2 11:46:00 np0005605476 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb  2 11:46:00 np0005605476 kernel: RPC: Registered named UNIX socket transport module.
Feb  2 11:46:00 np0005605476 kernel: RPC: Registered udp transport module.
Feb  2 11:46:00 np0005605476 kernel: RPC: Registered tcp transport module.
Feb  2 11:46:00 np0005605476 kernel: RPC: Registered tcp-with-tls transport module.
Feb  2 11:46:00 np0005605476 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  2 11:46:00 np0005605476 rpc.statd[441]: Version 2.5.4 starting
Feb  2 11:46:00 np0005605476 rpc.statd[441]: Initializing NSM state
Feb  2 11:46:00 np0005605476 rpc.idmapd[446]: Setting log level to 0
Feb  2 11:46:00 np0005605476 systemd[1]: Finished dracut pre-udev hook.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 11:46:00 np0005605476 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 11:46:00 np0005605476 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting dracut pre-trigger hook...
Feb  2 11:46:00 np0005605476 systemd[1]: Finished dracut pre-trigger hook.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting Coldplug All udev Devices...
Feb  2 11:46:00 np0005605476 systemd[1]: Created slice Slice /system/modprobe.
Feb  2 11:46:00 np0005605476 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 11:46:00 np0005605476 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 11:46:00 np0005605476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 11:46:00 np0005605476 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 11:46:00 np0005605476 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 11:46:00 np0005605476 systemd[1]: Reached target Network.
Feb  2 11:46:00 np0005605476 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 11:46:00 np0005605476 systemd[1]: Starting dracut initqueue hook...
Feb  2 11:46:00 np0005605476 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb  2 11:46:00 np0005605476 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb  2 11:46:00 np0005605476 kernel: vda: vda1
Feb  2 11:46:00 np0005605476 kernel: scsi host0: ata_piix
Feb  2 11:46:00 np0005605476 kernel: scsi host1: ata_piix
Feb  2 11:46:00 np0005605476 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb  2 11:46:00 np0005605476 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb  2 11:46:00 np0005605476 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 11:46:00 np0005605476 systemd[1]: Reached target Initrd Root Device.
Feb  2 11:46:01 np0005605476 systemd[1]: Mounting Kernel Configuration File System...
Feb  2 11:46:01 np0005605476 kernel: ata1: found unknown device (class 0)
Feb  2 11:46:01 np0005605476 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  2 11:46:01 np0005605476 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  2 11:46:01 np0005605476 systemd-udevd[480]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 11:46:01 np0005605476 systemd[1]: Mounted Kernel Configuration File System.
Feb  2 11:46:01 np0005605476 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target System Initialization.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Basic System.
Feb  2 11:46:01 np0005605476 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  2 11:46:01 np0005605476 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  2 11:46:01 np0005605476 systemd[1]: Finished dracut initqueue hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Remote Encrypted Volumes.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Remote File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting dracut pre-mount hook...
Feb  2 11:46:01 np0005605476 systemd[1]: Finished dracut pre-mount hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb  2 11:46:01 np0005605476 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Feb  2 11:46:01 np0005605476 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 11:46:01 np0005605476 systemd[1]: Mounting /sysroot...
Feb  2 11:46:01 np0005605476 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb  2 11:46:01 np0005605476 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb  2 11:46:01 np0005605476 kernel: XFS (vda1): Ending clean mount
Feb  2 11:46:01 np0005605476 systemd[1]: Mounted /sysroot.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Initrd Root File System.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb  2 11:46:01 np0005605476 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Initrd File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Reached target Initrd Default Target.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting dracut mount hook...
Feb  2 11:46:01 np0005605476 systemd[1]: Finished dracut mount hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb  2 11:46:01 np0005605476 rpc.idmapd[446]: exiting on signal 15
Feb  2 11:46:01 np0005605476 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Network.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Timer Units.
Feb  2 11:46:01 np0005605476 systemd[1]: dbus.socket: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb  2 11:46:01 np0005605476 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Initrd Default Target.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Basic System.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Initrd Root Device.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Initrd /usr File System.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Path Units.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Remote File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Slice Units.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Socket Units.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target System Initialization.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Local File Systems.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Swaps.
Feb  2 11:46:01 np0005605476 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped dracut mount hook.
Feb  2 11:46:01 np0005605476 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped dracut pre-mount hook.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped target Local Encrypted Volumes.
Feb  2 11:46:01 np0005605476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb  2 11:46:01 np0005605476 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped dracut initqueue hook.
Feb  2 11:46:01 np0005605476 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 11:46:01 np0005605476 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Create Volatile Files and Directories.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Coldplug All udev Devices.
Feb  2 11:46:02 np0005605476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped dracut pre-trigger hook.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Setup Virtual Console.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Closed udev Control Socket.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Closed udev Kernel Socket.
Feb  2 11:46:02 np0005605476 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped dracut pre-udev hook.
Feb  2 11:46:02 np0005605476 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped dracut cmdline hook.
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Cleanup udev Database...
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb  2 11:46:02 np0005605476 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Create List of Static Device Nodes.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Stopped Create System Users.
Feb  2 11:46:02 np0005605476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb  2 11:46:02 np0005605476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Cleanup udev Database.
Feb  2 11:46:02 np0005605476 systemd[1]: Reached target Switch Root.
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Switch Root...
Feb  2 11:46:02 np0005605476 systemd[1]: Switching root.
Feb  2 11:46:02 np0005605476 systemd-journald[304]: Journal stopped
Feb  2 11:46:02 np0005605476 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb  2 11:46:02 np0005605476 kernel: audit: type=1404 audit(1770050762.194:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 11:46:02 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 11:46:02 np0005605476 kernel: audit: type=1403 audit(1770050762.295:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  2 11:46:02 np0005605476 systemd: Successfully loaded SELinux policy in 107.839ms.
Feb  2 11:46:02 np0005605476 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.967ms.
Feb  2 11:46:02 np0005605476 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 11:46:02 np0005605476 systemd: Detected virtualization kvm.
Feb  2 11:46:02 np0005605476 systemd: Detected architecture x86-64.
Feb  2 11:46:02 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 11:46:02 np0005605476 systemd: initrd-switch-root.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd: Stopped Switch Root.
Feb  2 11:46:02 np0005605476 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  2 11:46:02 np0005605476 systemd: Created slice Slice /system/getty.
Feb  2 11:46:02 np0005605476 systemd: Created slice Slice /system/serial-getty.
Feb  2 11:46:02 np0005605476 systemd: Created slice Slice /system/sshd-keygen.
Feb  2 11:46:02 np0005605476 systemd: Created slice User and Session Slice.
Feb  2 11:46:02 np0005605476 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 11:46:02 np0005605476 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb  2 11:46:02 np0005605476 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb  2 11:46:02 np0005605476 systemd: Reached target Local Encrypted Volumes.
Feb  2 11:46:02 np0005605476 systemd: Stopped target Switch Root.
Feb  2 11:46:02 np0005605476 systemd: Stopped target Initrd File Systems.
Feb  2 11:46:02 np0005605476 systemd: Stopped target Initrd Root File System.
Feb  2 11:46:02 np0005605476 systemd: Reached target Local Integrity Protected Volumes.
Feb  2 11:46:02 np0005605476 systemd: Reached target Path Units.
Feb  2 11:46:02 np0005605476 systemd: Reached target rpc_pipefs.target.
Feb  2 11:46:02 np0005605476 systemd: Reached target Slice Units.
Feb  2 11:46:02 np0005605476 systemd: Reached target Swaps.
Feb  2 11:46:02 np0005605476 systemd: Reached target Local Verity Protected Volumes.
Feb  2 11:46:02 np0005605476 systemd: Listening on RPCbind Server Activation Socket.
Feb  2 11:46:02 np0005605476 systemd: Reached target RPC Port Mapper.
Feb  2 11:46:02 np0005605476 systemd: Listening on Process Core Dump Socket.
Feb  2 11:46:02 np0005605476 systemd: Listening on initctl Compatibility Named Pipe.
Feb  2 11:46:02 np0005605476 systemd: Listening on udev Control Socket.
Feb  2 11:46:02 np0005605476 systemd: Listening on udev Kernel Socket.
Feb  2 11:46:02 np0005605476 systemd: Mounting Huge Pages File System...
Feb  2 11:46:02 np0005605476 systemd: Mounting POSIX Message Queue File System...
Feb  2 11:46:02 np0005605476 systemd: Mounting Kernel Debug File System...
Feb  2 11:46:02 np0005605476 systemd: Mounting Kernel Trace File System...
Feb  2 11:46:02 np0005605476 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 11:46:02 np0005605476 systemd: Starting Create List of Static Device Nodes...
Feb  2 11:46:02 np0005605476 systemd: Starting Load Kernel Module configfs...
Feb  2 11:46:02 np0005605476 systemd: Starting Load Kernel Module drm...
Feb  2 11:46:02 np0005605476 systemd: Starting Load Kernel Module efi_pstore...
Feb  2 11:46:02 np0005605476 systemd: Starting Load Kernel Module fuse...
Feb  2 11:46:02 np0005605476 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb  2 11:46:02 np0005605476 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd: Stopped File System Check on Root Device.
Feb  2 11:46:02 np0005605476 systemd: Stopped Journal Service.
Feb  2 11:46:02 np0005605476 systemd: Starting Journal Service...
Feb  2 11:46:02 np0005605476 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 11:46:02 np0005605476 systemd: Starting Generate network units from Kernel command line...
Feb  2 11:46:02 np0005605476 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 11:46:02 np0005605476 systemd: Starting Remount Root and Kernel File Systems...
Feb  2 11:46:02 np0005605476 systemd-journald[679]: Journal started
Feb  2 11:46:02 np0005605476 systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 11:46:02 np0005605476 systemd[1]: Queued start job for default target Multi-User System.
Feb  2 11:46:02 np0005605476 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb  2 11:46:02 np0005605476 systemd: Starting Apply Kernel Variables...
Feb  2 11:46:02 np0005605476 systemd: Starting Coldplug All udev Devices...
Feb  2 11:46:02 np0005605476 kernel: fuse: init (API version 7.37)
Feb  2 11:46:02 np0005605476 systemd: Started Journal Service.
Feb  2 11:46:02 np0005605476 systemd[1]: Mounted Huge Pages File System.
Feb  2 11:46:02 np0005605476 systemd[1]: Mounted POSIX Message Queue File System.
Feb  2 11:46:02 np0005605476 systemd[1]: Mounted Kernel Debug File System.
Feb  2 11:46:02 np0005605476 systemd[1]: Mounted Kernel Trace File System.
Feb  2 11:46:02 np0005605476 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Create List of Static Device Nodes.
Feb  2 11:46:02 np0005605476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 11:46:02 np0005605476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb  2 11:46:02 np0005605476 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Load Kernel Module fuse.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Generate network units from Kernel command line.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Apply Kernel Variables.
Feb  2 11:46:02 np0005605476 kernel: ACPI: bus type drm_connector registered
Feb  2 11:46:02 np0005605476 systemd[1]: Mounting FUSE Control File System...
Feb  2 11:46:02 np0005605476 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Rebuild Hardware Database...
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb  2 11:46:02 np0005605476 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Load/Save OS Random Seed...
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Create System Users...
Feb  2 11:46:02 np0005605476 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Load Kernel Module drm.
Feb  2 11:46:02 np0005605476 systemd[1]: Mounted FUSE Control File System.
Feb  2 11:46:02 np0005605476 systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 11:46:02 np0005605476 systemd-journald[679]: Received client request to flush runtime journal.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Load/Save OS Random Seed.
Feb  2 11:46:02 np0005605476 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 11:46:02 np0005605476 systemd[1]: Finished Create System Users.
Feb  2 11:46:02 np0005605476 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target Preparation for Local File Systems.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target Local File Systems.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb  2 11:46:03 np0005605476 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb  2 11:46:03 np0005605476 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb  2 11:46:03 np0005605476 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Automatic Boot Loader Update...
Feb  2 11:46:03 np0005605476 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 11:46:03 np0005605476 bootctl[698]: Couldn't find EFI system partition, skipping.
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Automatic Boot Loader Update.
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Security Auditing Service...
Feb  2 11:46:03 np0005605476 systemd[1]: Starting RPC Bind...
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Rebuild Journal Catalog...
Feb  2 11:46:03 np0005605476 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb  2 11:46:03 np0005605476 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Rebuild Journal Catalog.
Feb  2 11:46:03 np0005605476 augenrules[709]: /sbin/augenrules: No change
Feb  2 11:46:03 np0005605476 systemd[1]: Started RPC Bind.
Feb  2 11:46:03 np0005605476 augenrules[724]: No rules
Feb  2 11:46:03 np0005605476 augenrules[724]: enabled 1
Feb  2 11:46:03 np0005605476 augenrules[724]: failure 1
Feb  2 11:46:03 np0005605476 augenrules[724]: pid 704
Feb  2 11:46:03 np0005605476 augenrules[724]: rate_limit 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_limit 8192
Feb  2 11:46:03 np0005605476 augenrules[724]: lost 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog 2
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time 60000
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time_actual 0
Feb  2 11:46:03 np0005605476 augenrules[724]: enabled 1
Feb  2 11:46:03 np0005605476 augenrules[724]: failure 1
Feb  2 11:46:03 np0005605476 augenrules[724]: pid 704
Feb  2 11:46:03 np0005605476 augenrules[724]: rate_limit 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_limit 8192
Feb  2 11:46:03 np0005605476 augenrules[724]: lost 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog 3
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time 60000
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time_actual 0
Feb  2 11:46:03 np0005605476 augenrules[724]: enabled 1
Feb  2 11:46:03 np0005605476 augenrules[724]: failure 1
Feb  2 11:46:03 np0005605476 augenrules[724]: pid 704
Feb  2 11:46:03 np0005605476 augenrules[724]: rate_limit 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_limit 8192
Feb  2 11:46:03 np0005605476 augenrules[724]: lost 0
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog 4
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time 60000
Feb  2 11:46:03 np0005605476 augenrules[724]: backlog_wait_time_actual 0
Feb  2 11:46:03 np0005605476 systemd[1]: Started Security Auditing Service.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Rebuild Hardware Database.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Update is Completed...
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Update is Completed.
Feb  2 11:46:03 np0005605476 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 11:46:03 np0005605476 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target System Initialization.
Feb  2 11:46:03 np0005605476 systemd[1]: Started dnf makecache --timer.
Feb  2 11:46:03 np0005605476 systemd[1]: Started Daily rotation of log files.
Feb  2 11:46:03 np0005605476 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target Timer Units.
Feb  2 11:46:03 np0005605476 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb  2 11:46:03 np0005605476 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target Socket Units.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting D-Bus System Message Bus...
Feb  2 11:46:03 np0005605476 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 11:46:03 np0005605476 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb  2 11:46:03 np0005605476 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 11:46:03 np0005605476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 11:46:03 np0005605476 systemd[1]: Started D-Bus System Message Bus.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target Basic System.
Feb  2 11:46:03 np0005605476 dbus-broker-lau[770]: Ready
Feb  2 11:46:03 np0005605476 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb  2 11:46:03 np0005605476 systemd[1]: Starting NTP client/server...
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb  2 11:46:03 np0005605476 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  2 11:46:03 np0005605476 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb  2 11:46:03 np0005605476 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb  2 11:46:03 np0005605476 chronyd[789]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 11:46:03 np0005605476 chronyd[789]: Loaded 0 symmetric keys
Feb  2 11:46:03 np0005605476 chronyd[789]: Using right/UTC timezone to obtain leap second data
Feb  2 11:46:03 np0005605476 chronyd[789]: Loaded seccomp filter (level 2)
Feb  2 11:46:03 np0005605476 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb  2 11:46:03 np0005605476 systemd[1]: Starting IPv4 firewall with iptables...
Feb  2 11:46:03 np0005605476 systemd[1]: Started irqbalance daemon.
Feb  2 11:46:03 np0005605476 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb  2 11:46:03 np0005605476 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 11:46:03 np0005605476 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 11:46:03 np0005605476 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target sshd-keygen.target.
Feb  2 11:46:03 np0005605476 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb  2 11:46:03 np0005605476 systemd[1]: Reached target User and Group Name Lookups.
Feb  2 11:46:03 np0005605476 systemd[1]: Starting User Login Management...
Feb  2 11:46:03 np0005605476 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb  2 11:46:03 np0005605476 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb  2 11:46:03 np0005605476 kernel: kvm_amd: TSC scaling supported
Feb  2 11:46:03 np0005605476 kernel: kvm_amd: Nested Virtualization enabled
Feb  2 11:46:03 np0005605476 kernel: kvm_amd: Nested Paging enabled
Feb  2 11:46:03 np0005605476 kernel: kvm_amd: LBR virtualization supported
Feb  2 11:46:03 np0005605476 systemd[1]: Started NTP client/server.
Feb  2 11:46:03 np0005605476 kernel: Console: switching to colour dummy device 80x25
Feb  2 11:46:03 np0005605476 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb  2 11:46:03 np0005605476 kernel: [drm] features: -context_init
Feb  2 11:46:03 np0005605476 kernel: [drm] number of scanouts: 1
Feb  2 11:46:03 np0005605476 kernel: [drm] number of cap sets: 0
Feb  2 11:46:03 np0005605476 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb  2 11:46:03 np0005605476 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 11:46:03 np0005605476 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 11:46:03 np0005605476 systemd-logind[799]: New seat seat0.
Feb  2 11:46:03 np0005605476 systemd[1]: Started User Login Management.
Feb  2 11:46:03 np0005605476 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb  2 11:46:03 np0005605476 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb  2 11:46:03 np0005605476 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb  2 11:46:03 np0005605476 kernel: Console: switching to colour frame buffer device 128x48
Feb  2 11:46:03 np0005605476 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb  2 11:46:03 np0005605476 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb  2 11:46:03 np0005605476 iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Feb  2 11:46:03 np0005605476 systemd[1]: Finished IPv4 firewall with iptables.
Feb  2 11:46:04 np0005605476 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 02 Feb 2026 16:46:04 +0000. Up 6.16 seconds.
Feb  2 11:46:04 np0005605476 systemd[1]: run-cloud\x2dinit-tmp-tmp4379cgg0.mount: Deactivated successfully.
Feb  2 11:46:05 np0005605476 systemd[1]: Starting Hostname Service...
Feb  2 11:46:05 np0005605476 systemd[1]: Started Hostname Service.
Feb  2 11:46:05 np0005605476 systemd-hostnamed[855]: Hostname set to <np0005605476.novalocal> (static)
Feb  2 11:46:05 np0005605476 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb  2 11:46:05 np0005605476 systemd[1]: Reached target Preparation for Network.
Feb  2 11:46:05 np0005605476 systemd[1]: Starting Network Manager...
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3283] NetworkManager (version 1.54.3-2.el9) is starting... (boot:0643d1a6-a03b-4b72-b3df-32e467e2189e)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3288] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3434] manager[0x5646b3205000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3482] hostname: hostname: using hostnamed
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3483] hostname: static hostname changed from (none) to "np0005605476.novalocal"
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3488] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3611] manager[0x5646b3205000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3612] manager[0x5646b3205000]: rfkill: WWAN hardware radio set enabled
Feb  2 11:46:05 np0005605476 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3695] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3695] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3696] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3697] manager: Networking is enabled by state file
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3699] settings: Loaded settings plugin: keyfile (internal)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3727] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3751] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3765] dhcp: init: Using DHCP client 'internal'
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3772] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3786] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3799] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3812] device (lo): Activation: starting connection 'lo' (e73db372-d804-4746-a9fe-87478b72a50b)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3821] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3825] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3861] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3867] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3872] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3875] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3879] device (eth0): carrier: link connected
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3884] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3892] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3898] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3904] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3906] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3909] manager: NetworkManager state is now CONNECTING
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3912] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3919] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3924] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3968] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3974] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 11:46:05 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.3992] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 systemd[1]: Started Network Manager.
Feb  2 11:46:05 np0005605476 systemd[1]: Reached target Network.
Feb  2 11:46:05 np0005605476 systemd[1]: Starting Network Manager Wait Online...
Feb  2 11:46:05 np0005605476 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb  2 11:46:05 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4235] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 systemd[1]: Started GSSAPI Proxy Daemon.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4257] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4264] device (lo): Activation: successful, device activated.
Feb  2 11:46:05 np0005605476 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 11:46:05 np0005605476 systemd[1]: Reached target NFS client services.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4271] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4273] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4276] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4279] device (eth0): Activation: successful, device activated.
Feb  2 11:46:05 np0005605476 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4286] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 11:46:05 np0005605476 NetworkManager[859]: <info>  [1770050765.4291] manager: startup complete
Feb  2 11:46:05 np0005605476 systemd[1]: Reached target Remote File Systems.
Feb  2 11:46:05 np0005605476 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 11:46:05 np0005605476 systemd[1]: Finished Network Manager Wait Online.
Feb  2 11:46:05 np0005605476 systemd[1]: Starting Cloud-init: Network Stage...
Feb  2 11:46:05 np0005605476 cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 02 Feb 2026 16:46:05 +0000. Up 7.09 seconds.
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.189         | 255.255.255.0 | global | fa:16:3e:27:c1:3b |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe27:c13b/64 |       .       |  link  | fa:16:3e:27:c1:3b |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb  2 11:46:05 np0005605476 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 11:46:07 np0005605476 cloud-init[923]: Generating public/private rsa key pair.
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key fingerprint is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: SHA256:dYVtDWrgLt5HdI+w+xjQ5VVxZf5AVpaNBr9c+g821ew root@np0005605476.novalocal
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key's randomart image is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: +---[RSA 3072]----+
Feb  2 11:46:07 np0005605476 cloud-init[923]: |          . .+=*%|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |         . ..*===|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |          o *o= +|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |         o = B Xo|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |        S o + * *|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |       . o o . + |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |        . . + + E|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |           . = o.|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |            . . .|
Feb  2 11:46:07 np0005605476 cloud-init[923]: +----[SHA256]-----+
Feb  2 11:46:07 np0005605476 cloud-init[923]: Generating public/private ecdsa key pair.
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key fingerprint is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: SHA256:N+FauEdItZPEKBlEfZFmQj9cKjHhFdMIWwZlpTfA7hc root@np0005605476.novalocal
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key's randomart image is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: +---[ECDSA 256]---+
Feb  2 11:46:07 np0005605476 cloud-init[923]: |     o+=BO&Bo    |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |      o.+&OO.    |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |       .=*@ o    |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |       . =.=E.   |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |        S.*  .   |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |         *...    |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |        o ..     |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |         .       |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |                 |
Feb  2 11:46:07 np0005605476 cloud-init[923]: +----[SHA256]-----+
Feb  2 11:46:07 np0005605476 cloud-init[923]: Generating public/private ed25519 key pair.
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb  2 11:46:07 np0005605476 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key fingerprint is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: SHA256:BjgVQizvlJkOSiPxwwyA9lp8VNT7m+4taF5VMhH9Lt8 root@np0005605476.novalocal
Feb  2 11:46:07 np0005605476 cloud-init[923]: The key's randomart image is:
Feb  2 11:46:07 np0005605476 cloud-init[923]: +--[ED25519 256]--+
Feb  2 11:46:07 np0005605476 cloud-init[923]: |+  oo =+.    oo  |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |+.. .=   .    .. |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |.*oo++.   .  o ..|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |.o*+*o . .    + .|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |o.+*.   S .  . . |
Feb  2 11:46:07 np0005605476 cloud-init[923]: |..  o  .   .. . .|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |          ..o  o.|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |         o.+.   E|
Feb  2 11:46:07 np0005605476 cloud-init[923]: |        o.oo..   |
Feb  2 11:46:07 np0005605476 cloud-init[923]: +----[SHA256]-----+
Feb  2 11:46:07 np0005605476 systemd[1]: Finished Cloud-init: Network Stage.
Feb  2 11:46:07 np0005605476 systemd[1]: Reached target Cloud-config availability.
Feb  2 11:46:07 np0005605476 systemd[1]: Reached target Network is Online.
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Cloud-init: Config Stage...
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Crash recovery kernel arming...
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Notify NFS peers of a restart...
Feb  2 11:46:07 np0005605476 systemd[1]: Starting System Logging Service...
Feb  2 11:46:07 np0005605476 systemd[1]: Starting OpenSSH server daemon...
Feb  2 11:46:07 np0005605476 sm-notify[1005]: Version 2.5.4 starting
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Permit User Sessions...
Feb  2 11:46:07 np0005605476 systemd[1]: Started Notify NFS peers of a restart.
Feb  2 11:46:07 np0005605476 systemd[1]: Finished Permit User Sessions.
Feb  2 11:46:07 np0005605476 systemd[1]: Started OpenSSH server daemon.
Feb  2 11:46:07 np0005605476 systemd[1]: Started Command Scheduler.
Feb  2 11:46:07 np0005605476 systemd[1]: Started Getty on tty1.
Feb  2 11:46:07 np0005605476 systemd[1]: Started Serial Getty on ttyS0.
Feb  2 11:46:07 np0005605476 systemd[1]: Reached target Login Prompts.
Feb  2 11:46:07 np0005605476 systemd[1]: Started System Logging Service.
Feb  2 11:46:07 np0005605476 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Feb  2 11:46:07 np0005605476 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb  2 11:46:07 np0005605476 systemd[1]: Reached target Multi-User System.
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb  2 11:46:07 np0005605476 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  2 11:46:07 np0005605476 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb  2 11:46:07 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 11:46:07 np0005605476 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Feb  2 11:46:07 np0005605476 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb  2 11:46:07 np0005605476 cloud-init[1141]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 02 Feb 2026 16:46:07 +0000. Up 9.02 seconds.
Feb  2 11:46:07 np0005605476 systemd[1]: Finished Cloud-init: Config Stage.
Feb  2 11:46:07 np0005605476 systemd[1]: Starting Cloud-init: Final Stage...
Feb  2 11:46:07 np0005605476 dracut[1267]: dracut-057-102.git20250818.el9
Feb  2 11:46:08 np0005605476 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb  2 11:46:08 np0005605476 cloud-init[1328]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 02 Feb 2026 16:46:08 +0000. Up 9.44 seconds.
Feb  2 11:46:08 np0005605476 cloud-init[1339]: #############################################################
Feb  2 11:46:08 np0005605476 cloud-init[1340]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb  2 11:46:08 np0005605476 cloud-init[1342]: 256 SHA256:N+FauEdItZPEKBlEfZFmQj9cKjHhFdMIWwZlpTfA7hc root@np0005605476.novalocal (ECDSA)
Feb  2 11:46:08 np0005605476 cloud-init[1344]: 256 SHA256:BjgVQizvlJkOSiPxwwyA9lp8VNT7m+4taF5VMhH9Lt8 root@np0005605476.novalocal (ED25519)
Feb  2 11:46:08 np0005605476 cloud-init[1346]: 3072 SHA256:dYVtDWrgLt5HdI+w+xjQ5VVxZf5AVpaNBr9c+g821ew root@np0005605476.novalocal (RSA)
Feb  2 11:46:08 np0005605476 cloud-init[1349]: -----END SSH HOST KEY FINGERPRINTS-----
Feb  2 11:46:08 np0005605476 cloud-init[1351]: #############################################################
Feb  2 11:46:08 np0005605476 cloud-init[1328]: Cloud-init v. 24.4-8.el9 finished at Mon, 02 Feb 2026 16:46:08 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.60 seconds
Feb  2 11:46:08 np0005605476 systemd[1]: Finished Cloud-init: Final Stage.
Feb  2 11:46:08 np0005605476 systemd[1]: Reached target Cloud-init target.
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 11:46:08 np0005605476 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: memstrack is not available
Feb  2 11:46:09 np0005605476 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 11:46:09 np0005605476 dracut[1269]: memstrack is not available
Feb  2 11:46:09 np0005605476 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 11:46:09 np0005605476 dracut[1269]: *** Including module: systemd ***
Feb  2 11:46:09 np0005605476 dracut[1269]: *** Including module: fips ***
Feb  2 11:46:09 np0005605476 dracut[1269]: *** Including module: systemd-initrd ***
Feb  2 11:46:09 np0005605476 dracut[1269]: *** Including module: i18n ***
Feb  2 11:46:09 np0005605476 chronyd[789]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Feb  2 11:46:09 np0005605476 chronyd[789]: System clock TAI offset set to 37 seconds
Feb  2 11:46:09 np0005605476 dracut[1269]: *** Including module: drm ***
Feb  2 11:46:10 np0005605476 dracut[1269]: *** Including module: prefixdevname ***
Feb  2 11:46:10 np0005605476 dracut[1269]: *** Including module: kernel-modules ***
Feb  2 11:46:10 np0005605476 kernel: block vda: the capability attribute has been deprecated.
Feb  2 11:46:10 np0005605476 dracut[1269]: *** Including module: kernel-modules-extra ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: qemu ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: fstab-sys ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: rootfs-block ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: terminfo ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: udev-rules ***
Feb  2 11:46:11 np0005605476 dracut[1269]: Skipping udev rule: 91-permissions.rules
Feb  2 11:46:11 np0005605476 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: virtiofs ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: dracut-systemd ***
Feb  2 11:46:11 np0005605476 chronyd[789]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: usrmount ***
Feb  2 11:46:11 np0005605476 dracut[1269]: *** Including module: base ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: fs-lib ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: kdumpbase ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Feb  2 11:46:12 np0005605476 dracut[1269]:  microcode_ctl module: mangling fw_dir
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb  2 11:46:12 np0005605476 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: openssl ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: shutdown ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including module: squash ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Including modules done ***
Feb  2 11:46:12 np0005605476 dracut[1269]: *** Installing kernel module dependencies ***
Feb  2 11:46:13 np0005605476 dracut[1269]: *** Installing kernel module dependencies done ***
Feb  2 11:46:13 np0005605476 dracut[1269]: *** Resolving executable dependencies ***
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 25 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 25 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 31 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 31 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 28 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 28 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 32 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 32 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 30 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 30 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 irqbalance[795]: Cannot change IRQ 29 affinity: Operation not permitted
Feb  2 11:46:13 np0005605476 irqbalance[795]: IRQ 29 affinity is now unmanaged
Feb  2 11:46:13 np0005605476 chronyd[789]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Feb  2 11:46:14 np0005605476 dracut[1269]: *** Resolving executable dependencies done ***
Feb  2 11:46:14 np0005605476 dracut[1269]: *** Generating early-microcode cpio image ***
Feb  2 11:46:14 np0005605476 dracut[1269]: *** Store current command line parameters ***
Feb  2 11:46:14 np0005605476 dracut[1269]: Stored kernel commandline:
Feb  2 11:46:14 np0005605476 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Feb  2 11:46:14 np0005605476 dracut[1269]: *** Install squash loader ***
Feb  2 11:46:15 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 11:46:15 np0005605476 dracut[1269]: *** Squashing the files inside the initramfs ***
Feb  2 11:46:16 np0005605476 dracut[1269]: *** Squashing the files inside the initramfs done ***
Feb  2 11:46:16 np0005605476 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb  2 11:46:16 np0005605476 dracut[1269]: *** Hardlinking files ***
Feb  2 11:46:16 np0005605476 dracut[1269]: *** Hardlinking files done ***
Feb  2 11:46:17 np0005605476 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb  2 11:46:17 np0005605476 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Feb  2 11:46:17 np0005605476 kdumpctl[1019]: kdump: Starting kdump: [OK]
Feb  2 11:46:17 np0005605476 systemd[1]: Finished Crash recovery kernel arming.
Feb  2 11:46:17 np0005605476 systemd[1]: Startup finished in 1.236s (kernel) + 2.330s (initrd) + 15.523s (userspace) = 19.090s.
Feb  2 11:46:28 np0005605476 systemd-logind[799]: New session 1 of user zuul.
Feb  2 11:46:28 np0005605476 systemd[1]: Created slice User Slice of UID 1000.
Feb  2 11:46:28 np0005605476 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb  2 11:46:28 np0005605476 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb  2 11:46:28 np0005605476 systemd[1]: Starting User Manager for UID 1000...
Feb  2 11:46:29 np0005605476 systemd[4307]: Queued start job for default target Main User Target.
Feb  2 11:46:29 np0005605476 systemd[4307]: Created slice User Application Slice.
Feb  2 11:46:29 np0005605476 systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 11:46:29 np0005605476 systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 11:46:29 np0005605476 systemd[4307]: Reached target Paths.
Feb  2 11:46:29 np0005605476 systemd[4307]: Reached target Timers.
Feb  2 11:46:29 np0005605476 systemd[4307]: Starting D-Bus User Message Bus Socket...
Feb  2 11:46:29 np0005605476 systemd[4307]: Starting Create User's Volatile Files and Directories...
Feb  2 11:46:29 np0005605476 systemd[4307]: Listening on D-Bus User Message Bus Socket.
Feb  2 11:46:29 np0005605476 systemd[4307]: Reached target Sockets.
Feb  2 11:46:29 np0005605476 systemd[4307]: Finished Create User's Volatile Files and Directories.
Feb  2 11:46:29 np0005605476 systemd[4307]: Reached target Basic System.
Feb  2 11:46:29 np0005605476 systemd[4307]: Reached target Main User Target.
Feb  2 11:46:29 np0005605476 systemd[4307]: Startup finished in 128ms.
Feb  2 11:46:29 np0005605476 systemd[1]: Started User Manager for UID 1000.
Feb  2 11:46:29 np0005605476 systemd[1]: Started Session 1 of User zuul.
Feb  2 11:46:29 np0005605476 python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 11:46:32 np0005605476 python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 11:46:35 np0005605476 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 11:46:38 np0005605476 python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 11:46:39 np0005605476 python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb  2 11:46:40 np0005605476 python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKvFMz9GU3op72wWi5LdWjGhFvql0rQW7m4966NccDsh8vBe+WYUMx9pb575/tPT7ItW2GlnCI94uV/7/xw+be7n954iTt3RqdYKpIVAUyPh9PACsWpu4DW5Ub6rIovYDXMqWq1DZR+P6J+rgQT0IF/axveyZVwbBnpECcvSYTt5PGNdDId6Yl1388JkpyKHpwCycsVqZS9NLBopn4JGlFiOSN3wKclDg/0V3UdEYONyeqT69QZ9SQP16QumwbdufsZ5p0E3J0VUt0FPwQCATG2GFa0Seh9Vqbmz5PgWh+BODOKSBWdyKfvmwijtJa28sKkXhOLm8qNIs0Qy5NynEA09RDdxCIcvQka47KEKaBEkeWkqV+yIxMJFLkSJ7HYNKOTeWiYBY8/IC7yoTtrvpynL6BSlemf5+gWCdrxlxIxjazpaeEhRxDVLIx0/mmih6Zfvw2xaUffCtsAc/LknpZNAojKtZCQo6V8BrNJx2K2l6Z1NB3OM2yQ8k9mGz/rgs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:41 np0005605476 python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:41 np0005605476 python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:46:41 np0005605476 python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770050801.3459787-207-256523939282153/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0cc079e11ebd4977bcbb1e2aa624448e_id_rsa follow=False checksum=8584dfe8c6549a8e8d3247a887647a921e1ba368 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:42 np0005605476 python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:46:42 np0005605476 python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770050802.2615073-240-153686325336100/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0cc079e11ebd4977bcbb1e2aa624448e_id_rsa.pub follow=False checksum=4c73de3203ebc4cacbaec1bdac4c66edce82d5c9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:44 np0005605476 python3[4980]: ansible-ping Invoked with data=pong
Feb  2 11:46:45 np0005605476 python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 11:46:47 np0005605476 python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb  2 11:46:48 np0005605476 python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:48 np0005605476 python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:48 np0005605476 python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:48 np0005605476 python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:49 np0005605476 python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:49 np0005605476 python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:51 np0005605476 python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:51 np0005605476 python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:46:52 np0005605476 python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1770050811.3148413-21-130681744223467/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:46:52 np0005605476 python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:53 np0005605476 python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:53 np0005605476 python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:53 np0005605476 python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:54 np0005605476 python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:54 np0005605476 python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:54 np0005605476 python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:54 np0005605476 python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:55 np0005605476 python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:55 np0005605476 python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:55 np0005605476 python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:56 np0005605476 python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:56 np0005605476 python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:56 np0005605476 python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:56 np0005605476 python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:57 np0005605476 python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:57 np0005605476 python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:57 np0005605476 python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:57 np0005605476 python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:58 np0005605476 python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:58 np0005605476 python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:58 np0005605476 python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:58 np0005605476 python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:59 np0005605476 python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:59 np0005605476 python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:46:59 np0005605476 python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:47:02 np0005605476 python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 11:47:02 np0005605476 systemd[1]: Starting Time & Date Service...
Feb  2 11:47:02 np0005605476 systemd[1]: Started Time & Date Service.
Feb  2 11:47:02 np0005605476 systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Feb  2 11:47:03 np0005605476 python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:03 np0005605476 python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:47:03 np0005605476 python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1770050823.3362746-153-251812089994432/source _original_basename=tmpiy21_rp0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:04 np0005605476 python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:47:04 np0005605476 python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770050824.1968925-183-1073688781492/source _original_basename=tmpdps4xmhn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:05 np0005605476 python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:47:05 np0005605476 python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770050825.2698963-231-2968740142809/source _original_basename=tmpypo7n_k7 follow=False checksum=b9ea63fb38f50d3257ec076159ca59d9b4b7fe2c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:06 np0005605476 python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:47:06 np0005605476 python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:47:07 np0005605476 python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:47:07 np0005605476 python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1770050826.9099073-273-103602433239833/source _original_basename=tmp9op2pm36 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:08 np0005605476 python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4b40-c85a-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:47:08 np0005605476 python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4b40-c85a-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb  2 11:47:10 np0005605476 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:27 np0005605476 python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:47:32 np0005605476 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb  2 11:48:01 np0005605476 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb  2 11:48:01 np0005605476 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2629] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 11:48:01 np0005605476 systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2827] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2867] settings: (eth1): created default wired connection 'Wired connection 1'
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2873] device (eth1): carrier: link connected
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2876] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2885] policy: auto-activating connection 'Wired connection 1' (e18965a6-b5bf-33df-be23-78e096a981f9)
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2892] device (eth1): Activation: starting connection 'Wired connection 1' (e18965a6-b5bf-33df-be23-78e096a981f9)
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2893] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2898] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2903] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 11:48:01 np0005605476 NetworkManager[859]: <info>  [1770050881.2910] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:48:02 np0005605476 python3[6981]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8735-9f7c-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:48:12 np0005605476 python3[7061]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:48:12 np0005605476 python3[7134]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770050892.092155-102-165064246358556/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=993380a157491680025a20a6a51f6e39f0487315 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:48:13 np0005605476 python3[7184]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 11:48:13 np0005605476 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 11:48:13 np0005605476 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 11:48:13 np0005605476 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 11:48:13 np0005605476 systemd[1]: Stopping Network Manager...
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5073] caught SIGTERM, shutting down normally.
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5083] dhcp4 (eth0): canceled DHCP transaction
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5083] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5083] dhcp4 (eth0): state changed no lease
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5086] manager: NetworkManager state is now CONNECTING
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5231] dhcp4 (eth1): canceled DHCP transaction
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5231] dhcp4 (eth1): state changed no lease
Feb  2 11:48:13 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 11:48:13 np0005605476 NetworkManager[859]: <info>  [1770050893.5276] exiting (success)
Feb  2 11:48:13 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 11:48:13 np0005605476 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 11:48:13 np0005605476 systemd[1]: Stopped Network Manager.
Feb  2 11:48:13 np0005605476 systemd[1]: Starting Network Manager...
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.5728] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0643d1a6-a03b-4b72-b3df-32e467e2189e)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.5729] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.5780] manager[0x55f8693a1000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 11:48:13 np0005605476 systemd[1]: Starting Hostname Service...
Feb  2 11:48:13 np0005605476 systemd[1]: Started Hostname Service.
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6511] hostname: hostname: using hostnamed
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6514] hostname: static hostname changed from (none) to "np0005605476.novalocal"
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6521] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6527] manager[0x55f8693a1000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6528] manager[0x55f8693a1000]: rfkill: WWAN hardware radio set enabled
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6570] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6571] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6572] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6573] manager: Networking is enabled by state file
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6577] settings: Loaded settings plugin: keyfile (internal)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6582] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6626] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6639] dhcp: init: Using DHCP client 'internal'
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6644] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6652] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6660] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6671] device (lo): Activation: starting connection 'lo' (e73db372-d804-4746-a9fe-87478b72a50b)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6680] device (eth0): carrier: link connected
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6687] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6693] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6694] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6705] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6716] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6725] device (eth1): carrier: link connected
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6731] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6738] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e18965a6-b5bf-33df-be23-78e096a981f9) (indicated)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6738] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6744] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6753] device (eth1): Activation: starting connection 'Wired connection 1' (e18965a6-b5bf-33df-be23-78e096a981f9)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6759] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 11:48:13 np0005605476 systemd[1]: Started Network Manager.
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6764] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6768] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6770] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6773] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6778] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6782] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6786] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6792] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6805] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6809] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6820] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6825] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6845] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6852] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6860] device (lo): Activation: successful, device activated.
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6871] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6879] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6939] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 systemd[1]: Starting Network Manager Wait Online...
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6964] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6965] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6968] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6971] device (eth0): Activation: successful, device activated.
Feb  2 11:48:13 np0005605476 NetworkManager[7196]: <info>  [1770050893.6976] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 11:48:14 np0005605476 python3[7268]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8735-9f7c-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:48:23 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 11:48:40 np0005605476 systemd[4307]: Starting Mark boot as successful...
Feb  2 11:48:40 np0005605476 systemd[4307]: Finished Mark boot as successful.
Feb  2 11:48:43 np0005605476 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6173] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 11:48:58 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 11:48:58 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6477] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6479] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6484] device (eth1): Activation: successful, device activated.
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6490] manager: startup complete
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6492] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <warn>  [1770050938.6497] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6503] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 systemd[1]: Finished Network Manager Wait Online.
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6611] dhcp4 (eth1): canceled DHCP transaction
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6612] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6612] dhcp4 (eth1): state changed no lease
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6624] policy: auto-activating connection 'ci-private-network' (ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3)
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6628] device (eth1): Activation: starting connection 'ci-private-network' (ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3)
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6629] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6638] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6644] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6672] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6673] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 11:48:58 np0005605476 NetworkManager[7196]: <info>  [1770050938.6677] device (eth1): Activation: successful, device activated.
Feb  2 11:49:08 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 11:49:13 np0005605476 python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:49:13 np0005605476 python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770050953.143065-267-186098056267376/source _original_basename=tmpzh8d4rkp follow=False checksum=4afc33d8796c4d0a05d3c8aff74739aae3c20214 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:50:13 np0005605476 systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Feb  2 11:51:40 np0005605476 systemd[4307]: Created slice User Background Tasks Slice.
Feb  2 11:51:40 np0005605476 systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 11:51:40 np0005605476 systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 11:55:55 np0005605476 systemd-logind[799]: New session 3 of user zuul.
Feb  2 11:55:55 np0005605476 systemd[1]: Started Session 3 of User zuul.
Feb  2 11:55:56 np0005605476 python3[7511]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-d4f4-7d60-000000002169-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:55:56 np0005605476 python3[7540]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:56 np0005605476 python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:56 np0005605476 python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:57 np0005605476 python3[7618]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:57 np0005605476 python3[7644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:58 np0005605476 python3[7722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:55:58 np0005605476 python3[7795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770051357.9772513-495-79972427060446/source _original_basename=tmpb3zy00l4 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:55:59 np0005605476 python3[7845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 11:55:59 np0005605476 systemd[1]: Reloading.
Feb  2 11:55:59 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 11:56:01 np0005605476 python3[7901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb  2 11:56:01 np0005605476 python3[7927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:01 np0005605476 python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:02 np0005605476 python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:02 np0005605476 python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:03 np0005605476 python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-d4f4-7d60-000000002170-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:03 np0005605476 python3[8068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 11:56:05 np0005605476 systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Feb  2 11:56:05 np0005605476 systemd[1]: session-3.scope: Deactivated successfully.
Feb  2 11:56:05 np0005605476 systemd[1]: session-3.scope: Consumed 3.728s CPU time.
Feb  2 11:56:05 np0005605476 systemd-logind[799]: Removed session 3.
Feb  2 11:56:07 np0005605476 systemd-logind[799]: New session 4 of user zuul.
Feb  2 11:56:07 np0005605476 systemd[1]: Started Session 4 of User zuul.
Feb  2 11:56:07 np0005605476 python3[8102]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 11:56:13 np0005605476 setsebool[8140]: The virt_use_nfs policy boolean was changed to 1 by root
Feb  2 11:56:13 np0005605476 setsebool[8140]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb  2 11:56:24 np0005605476 kernel: SELinux:  Converting 385 SID table entries...
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 11:56:24 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  Converting 388 SID table entries...
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 11:56:33 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 11:56:51 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 11:56:51 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 11:56:52 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 11:56:52 np0005605476 systemd[1]: Reloading.
Feb  2 11:56:52 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 11:56:52 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 11:56:56 np0005605476 python3[13216]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-a945-e73e-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 11:56:56 np0005605476 kernel: evm: overlay not supported
Feb  2 11:56:56 np0005605476 systemd[4307]: Starting D-Bus User Message Bus...
Feb  2 11:56:56 np0005605476 dbus-broker-launch[13900]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb  2 11:56:56 np0005605476 dbus-broker-launch[13900]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb  2 11:56:56 np0005605476 systemd[4307]: Started D-Bus User Message Bus.
Feb  2 11:56:56 np0005605476 dbus-broker-lau[13900]: Ready
Feb  2 11:56:56 np0005605476 systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 11:56:56 np0005605476 systemd[4307]: Created slice Slice /user.
Feb  2 11:56:56 np0005605476 systemd[4307]: podman-13869.scope: unit configures an IP firewall, but not running as root.
Feb  2 11:56:56 np0005605476 systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Feb  2 11:56:56 np0005605476 systemd[4307]: Started podman-13869.scope.
Feb  2 11:56:57 np0005605476 systemd[4307]: Started podman-pause-6e894c72.scope.
Feb  2 11:56:57 np0005605476 python3[14059]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.46:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.46:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:56:57 np0005605476 python3[14059]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb  2 11:56:58 np0005605476 systemd[1]: session-4.scope: Deactivated successfully.
Feb  2 11:56:58 np0005605476 systemd[1]: session-4.scope: Consumed 41.818s CPU time.
Feb  2 11:56:58 np0005605476 systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Feb  2 11:56:58 np0005605476 systemd-logind[799]: Removed session 4.
Feb  2 11:57:13 np0005605476 irqbalance[795]: Cannot change IRQ 27 affinity: Operation not permitted
Feb  2 11:57:13 np0005605476 irqbalance[795]: IRQ 27 affinity is now unmanaged
Feb  2 11:57:19 np0005605476 systemd-logind[799]: New session 5 of user zuul.
Feb  2 11:57:19 np0005605476 systemd[1]: Started Session 5 of User zuul.
Feb  2 11:57:19 np0005605476 python3[25494]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHocsnJ71ESjNDmn4q9xI3zr0Wv4vOoeH3MtL1tGvABiWsyqBZt2kDB17wc3TE0og/gE9pxtntbVmdSy6ZyvG50= zuul@np0005605475.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:57:20 np0005605476 python3[25728]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHocsnJ71ESjNDmn4q9xI3zr0Wv4vOoeH3MtL1tGvABiWsyqBZt2kDB17wc3TE0og/gE9pxtntbVmdSy6ZyvG50= zuul@np0005605475.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:57:20 np0005605476 python3[26197]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005605476.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb  2 11:57:21 np0005605476 python3[26454]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHocsnJ71ESjNDmn4q9xI3zr0Wv4vOoeH3MtL1tGvABiWsyqBZt2kDB17wc3TE0og/gE9pxtntbVmdSy6ZyvG50= zuul@np0005605475.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 11:57:21 np0005605476 python3[26744]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 11:57:22 np0005605476 python3[27033]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770051441.4236178-135-89492016863162/source _original_basename=tmpj6d515ww follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 11:57:22 np0005605476 python3[27418]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb  2 11:57:22 np0005605476 systemd[1]: Starting Hostname Service...
Feb  2 11:57:22 np0005605476 systemd[1]: Started Hostname Service.
Feb  2 11:57:22 np0005605476 systemd-hostnamed[27529]: Changed pretty hostname to 'compute-0'
Feb  2 11:57:22 np0005605476 systemd-hostnamed[27529]: Hostname set to <compute-0> (static)
Feb  2 11:57:22 np0005605476 NetworkManager[7196]: <info>  [1770051442.8747] hostname: static hostname changed from "np0005605476.novalocal" to "compute-0"
Feb  2 11:57:22 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 11:57:22 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 11:57:23 np0005605476 systemd[1]: session-5.scope: Deactivated successfully.
Feb  2 11:57:23 np0005605476 systemd[1]: session-5.scope: Consumed 1.890s CPU time.
Feb  2 11:57:23 np0005605476 systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Feb  2 11:57:23 np0005605476 systemd-logind[799]: Removed session 5.
Feb  2 11:57:29 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 11:57:29 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 11:57:29 np0005605476 systemd[1]: man-db-cache-update.service: Consumed 41.675s CPU time.
Feb  2 11:57:29 np0005605476 systemd[1]: run-r48b251d5347245a492e0a0cb42f4c637.service: Deactivated successfully.
Feb  2 11:57:32 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 11:57:52 np0005605476 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 12:01:08 np0005605476 systemd[1]: Starting Cleanup of Temporary Directories...
Feb  2 12:01:08 np0005605476 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb  2 12:01:08 np0005605476 systemd[1]: Finished Cleanup of Temporary Directories.
Feb  2 12:01:08 np0005605476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb  2 12:01:44 np0005605476 systemd-logind[799]: New session 6 of user zuul.
Feb  2 12:01:44 np0005605476 systemd[1]: Started Session 6 of User zuul.
Feb  2 12:01:44 np0005605476 python3[30080]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:01:46 np0005605476 python3[30196]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:46 np0005605476 python3[30269]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:47 np0005605476 python3[30295]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:47 np0005605476 python3[30368]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:47 np0005605476 python3[30394]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:47 np0005605476 python3[30467]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:48 np0005605476 python3[30493]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:48 np0005605476 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:48 np0005605476 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:48 np0005605476 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:49 np0005605476 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:49 np0005605476 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:01:49 np0005605476 python3[30790]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:01:49 np0005605476 python3[30863]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770051706.1368694-33739-197580515770195/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:02:01 np0005605476 python3[30921]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:07:00 np0005605476 systemd[1]: session-6.scope: Deactivated successfully.
Feb  2 12:07:00 np0005605476 systemd[1]: session-6.scope: Consumed 4.230s CPU time.
Feb  2 12:07:00 np0005605476 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Feb  2 12:07:00 np0005605476 systemd-logind[799]: Removed session 6.
Feb  2 12:12:49 np0005605476 systemd-logind[799]: New session 7 of user zuul.
Feb  2 12:12:49 np0005605476 systemd[1]: Started Session 7 of User zuul.
Feb  2 12:12:49 np0005605476 python3.9[31086]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:12:51 np0005605476 python3.9[31267]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:12:59 np0005605476 systemd[1]: session-7.scope: Deactivated successfully.
Feb  2 12:12:59 np0005605476 systemd[1]: session-7.scope: Consumed 7.479s CPU time.
Feb  2 12:12:59 np0005605476 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Feb  2 12:12:59 np0005605476 systemd-logind[799]: Removed session 7.
Feb  2 12:13:14 np0005605476 systemd-logind[799]: New session 8 of user zuul.
Feb  2 12:13:14 np0005605476 systemd[1]: Started Session 8 of User zuul.
Feb  2 12:13:15 np0005605476 python3.9[31478]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 12:13:16 np0005605476 python3.9[31652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:13:17 np0005605476 python3.9[31804]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:13:17 np0005605476 python3.9[31957]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:13:18 np0005605476 python3.9[32109]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:13:19 np0005605476 python3.9[32261]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:13:19 np0005605476 python3.9[32384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052398.7906587-68-124340074371810/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:13:20 np0005605476 python3.9[32536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:13:21 np0005605476 python3.9[32692]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:13:21 np0005605476 python3.9[32844]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:13:22 np0005605476 python3.9[32994]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:13:26 np0005605476 python3.9[33247]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:13:26 np0005605476 python3.9[33397]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:13:27 np0005605476 python3.9[33551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:13:28 np0005605476 python3.9[33709]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:13:29 np0005605476 python3.9[33793]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:14:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:14:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:14:16 np0005605476 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb  2 12:14:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:14:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:14:16 np0005605476 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb  2 12:14:16 np0005605476 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb  2 12:14:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:14:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:14:16 np0005605476 systemd[1]: Listening on LVM2 poll daemon socket.
Feb  2 12:14:16 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:14:16 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:14:16 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:15:11 np0005605476 kernel: SELinux:  Converting 2726 SID table entries...
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:15:11 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:15:11 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb  2 12:15:11 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:15:11 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:15:11 np0005605476 systemd[1]: Reloading.
Feb  2 12:15:11 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:15:11 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:15:12 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:15:12 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:15:12 np0005605476 systemd[1]: run-rc79555065e694f96891696ad3cd06c1c.service: Deactivated successfully.
Feb  2 12:15:12 np0005605476 python3.9[35301]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:14 np0005605476 python3.9[35582]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 12:15:15 np0005605476 python3.9[35734]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 12:15:17 np0005605476 python3.9[35887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:15:18 np0005605476 python3.9[36039]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 12:15:19 np0005605476 python3.9[36191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:15:19 np0005605476 python3.9[36343]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:15:20 np0005605476 python3.9[36466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052519.1987684-231-235897377317598/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:15:20 np0005605476 python3.9[36618]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:15:23 np0005605476 python3.9[36770]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:24 np0005605476 python3.9[36923]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:15:25 np0005605476 python3.9[37075]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 12:15:25 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:15:25 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:15:26 np0005605476 python3.9[37229]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:15:27 np0005605476 python3.9[37387]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 12:15:27 np0005605476 python3.9[37547]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 12:15:28 np0005605476 python3.9[37700]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:15:29 np0005605476 python3.9[37858]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 12:15:29 np0005605476 python3.9[38010]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:15:31 np0005605476 python3.9[38164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:15:32 np0005605476 python3.9[38316]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:15:32 np0005605476 python3.9[38439]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770052532.0811572-350-184786785378594/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:15:33 np0005605476 python3.9[38591]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:15:33 np0005605476 systemd[1]: Starting Load Kernel Modules...
Feb  2 12:15:33 np0005605476 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  2 12:15:33 np0005605476 kernel: Bridge firewalling registered
Feb  2 12:15:33 np0005605476 systemd-modules-load[38595]: Inserted module 'br_netfilter'
Feb  2 12:15:33 np0005605476 systemd[1]: Finished Load Kernel Modules.
Feb  2 12:15:34 np0005605476 python3.9[38752]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:15:35 np0005605476 python3.9[38875]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770052534.139096-373-124678617432301/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:15:35 np0005605476 python3.9[39027]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:15:39 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:15:39 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:15:39 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:15:39 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:15:39 np0005605476 systemd[1]: Reloading.
Feb  2 12:15:39 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:15:40 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:15:41 np0005605476 python3.9[40709]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:15:42 np0005605476 python3.9[41881]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 12:15:42 np0005605476 python3.9[42764]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:15:43 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:15:43 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:15:43 np0005605476 systemd[1]: man-db-cache-update.service: Consumed 3.352s CPU time.
Feb  2 12:15:43 np0005605476 systemd[1]: run-r0ea87d0ffa9b44819195e6bbee41bd7f.service: Deactivated successfully.
Feb  2 12:15:43 np0005605476 python3.9[43240]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:43 np0005605476 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 12:15:43 np0005605476 systemd[1]: Starting Authorization Manager...
Feb  2 12:15:43 np0005605476 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 12:15:43 np0005605476 polkitd[43458]: Started polkitd version 0.117
Feb  2 12:15:43 np0005605476 systemd[1]: Started Authorization Manager.
Feb  2 12:15:44 np0005605476 python3.9[43628]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:15:44 np0005605476 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 12:15:44 np0005605476 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 12:15:44 np0005605476 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 12:15:44 np0005605476 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 12:15:44 np0005605476 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 12:15:45 np0005605476 python3.9[43790]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 12:15:47 np0005605476 python3.9[43942]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:15:47 np0005605476 systemd[1]: Reloading.
Feb  2 12:15:47 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:15:48 np0005605476 python3.9[44131]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:15:48 np0005605476 systemd[1]: Reloading.
Feb  2 12:15:48 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:15:48 np0005605476 systemd[1]: Starting dnf makecache...
Feb  2 12:15:48 np0005605476 dnf[44169]: Failed determining last makecache time.
Feb  2 12:15:48 np0005605476 dnf[44169]: delorean-openstack-barbican-42b4c41831408a8e323 117 kB/s | 3.0 kB     00:00
Feb  2 12:15:48 np0005605476 dnf[44169]: delorean-python-glean-642fffe0203a8ffcc2443db52 180 kB/s | 3.0 kB     00:00
Feb  2 12:15:48 np0005605476 dnf[44169]: delorean-openstack-cinder-1c00d6490d88e436f26ef 194 kB/s | 3.0 kB     00:00
Feb  2 12:15:48 np0005605476 dnf[44169]: delorean-python-stevedore-c4acc5639fd2329372142 198 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 python3.9[44326]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-python-cloudkitty-tests-tempest-783703 6.0 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-diskimage-builder-61b717cc45660834fe9a 177 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-nova-eaa65f0b85123a4ee343246 208 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-python-designate-tests-tempest-347fdbc 200 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-glance-1fd12c29b339f30fe823e 189 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 179 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-manila-d783d10e75495b73866db 192 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-neutron-95cadbd379667c8520c8 200 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-octavia-5975097dd4b021385178 199 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-watcher-c014f81a8647287f6dcc 184 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-python-tcib-78032d201b02cee27e8e644c61 174 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 191 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-swift-dc98a8463506ac520c469a 197 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-python-tempestconf-8515371b7cceebd4282 191 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: delorean-openstack-heat-ui-013accbfd179753bc3f0 202 kB/s | 3.0 kB     00:00
Feb  2 12:15:49 np0005605476 python3.9[44490]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:49 np0005605476 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb  2 12:15:49 np0005605476 dnf[44169]: CentOS Stream 9 - BaseOS                         52 kB/s | 6.7 kB     00:00
Feb  2 12:15:49 np0005605476 dnf[44169]: CentOS Stream 9 - AppStream                      57 kB/s | 6.8 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: CentOS Stream 9 - CRB                            29 kB/s | 6.6 kB     00:00
Feb  2 12:15:50 np0005605476 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:50 np0005605476 dnf[44169]: CentOS Stream 9 - Extras packages                58 kB/s | 7.3 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: dlrn-antelope-testing                           165 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: dlrn-antelope-build-deps                        182 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: centos9-rabbitmq                                111 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: centos9-storage                                 127 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: centos9-opstools                                 49 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: NFV SIG OpenvSwitch                              83 kB/s | 3.0 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: repo-setup-centos-appstream                      97 kB/s | 4.4 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: repo-setup-centos-baseos                         76 kB/s | 3.9 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: repo-setup-centos-highavailability              144 kB/s | 3.9 kB     00:00
Feb  2 12:15:50 np0005605476 dnf[44169]: repo-setup-centos-powertools                    171 kB/s | 4.3 kB     00:00
Feb  2 12:15:51 np0005605476 dnf[44169]: Extra Packages for Enterprise Linux 9 - x86_64  102 kB/s |  31 kB     00:00
Feb  2 12:15:51 np0005605476 dnf[44169]: Metadata cache created.
Feb  2 12:15:51 np0005605476 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb  2 12:15:51 np0005605476 systemd[1]: Finished dnf makecache.
Feb  2 12:15:51 np0005605476 systemd[1]: dnf-makecache.service: Consumed 1.794s CPU time.
Feb  2 12:15:52 np0005605476 python3.9[44831]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:15:52 np0005605476 python3.9[44984]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:15:52 np0005605476 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 12:15:52 np0005605476 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 12:15:52 np0005605476 systemd[1]: Stopping Apply Kernel Variables...
Feb  2 12:15:52 np0005605476 systemd[1]: Starting Apply Kernel Variables...
Feb  2 12:15:52 np0005605476 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  2 12:15:52 np0005605476 systemd[1]: Finished Apply Kernel Variables.
Feb  2 12:15:53 np0005605476 systemd[1]: session-8.scope: Deactivated successfully.
Feb  2 12:15:53 np0005605476 systemd[1]: session-8.scope: Consumed 2min 571ms CPU time.
Feb  2 12:15:53 np0005605476 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Feb  2 12:15:53 np0005605476 systemd-logind[799]: Removed session 8.
Feb  2 12:15:58 np0005605476 systemd-logind[799]: New session 9 of user zuul.
Feb  2 12:15:58 np0005605476 systemd[1]: Started Session 9 of User zuul.
Feb  2 12:15:59 np0005605476 python3.9[45167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:16:00 np0005605476 python3.9[45323]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 12:16:01 np0005605476 python3.9[45476]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:16:02 np0005605476 python3.9[45634]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 12:16:02 np0005605476 python3.9[45794]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:16:03 np0005605476 python3.9[45878]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 12:16:07 np0005605476 python3.9[46042]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:16:13 np0005605476 irqbalance[795]: Cannot change IRQ 26 affinity: Operation not permitted
Feb  2 12:16:13 np0005605476 irqbalance[795]: IRQ 26 affinity is now unmanaged
Feb  2 12:16:17 np0005605476 kernel: SELinux:  Converting 2739 SID table entries...
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:16:17 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:16:18 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb  2 12:16:18 np0005605476 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb  2 12:16:19 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:16:19 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:16:19 np0005605476 systemd[1]: Reloading.
Feb  2 12:16:19 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:16:19 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:16:19 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:16:20 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:16:20 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:16:20 np0005605476 systemd[1]: run-r9f17515ba96445edb7e1435e1f784e28.service: Deactivated successfully.
Feb  2 12:16:21 np0005605476 python3.9[47141]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:16:21 np0005605476 systemd[1]: Reloading.
Feb  2 12:16:21 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:16:21 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:16:21 np0005605476 systemd[1]: Starting Open vSwitch Database Unit...
Feb  2 12:16:21 np0005605476 chown[47183]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb  2 12:16:21 np0005605476 ovs-ctl[47188]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb  2 12:16:21 np0005605476 ovs-ctl[47188]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-ctl[47188]: Starting ovsdb-server [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-vsctl[47237]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb  2 12:16:21 np0005605476 ovs-vsctl[47257]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"13051b64-c07e-4136-ad5c-993d3a84d93c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb  2 12:16:21 np0005605476 ovs-ctl[47188]: Configuring Open vSwitch system IDs [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-ctl[47188]: Enabling remote OVSDB managers [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-vsctl[47263]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 12:16:21 np0005605476 systemd[1]: Started Open vSwitch Database Unit.
Feb  2 12:16:21 np0005605476 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb  2 12:16:21 np0005605476 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb  2 12:16:21 np0005605476 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb  2 12:16:21 np0005605476 kernel: openvswitch: Open vSwitch switching datapath
Feb  2 12:16:21 np0005605476 ovs-ctl[47307]: Inserting openvswitch module [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-ctl[47276]: Starting ovs-vswitchd [  OK  ]
Feb  2 12:16:21 np0005605476 ovs-ctl[47276]: Enabling remote OVSDB managers [  OK  ]
Feb  2 12:16:21 np0005605476 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb  2 12:16:21 np0005605476 ovs-vsctl[47325]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 12:16:21 np0005605476 systemd[1]: Starting Open vSwitch...
Feb  2 12:16:21 np0005605476 systemd[1]: Finished Open vSwitch.
Feb  2 12:16:22 np0005605476 python3.9[47476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:16:23 np0005605476 python3.9[47628]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 12:16:24 np0005605476 kernel: SELinux:  Converting 2753 SID table entries...
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:16:24 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:16:25 np0005605476 python3.9[47783]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:16:25 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb  2 12:16:26 np0005605476 python3.9[47941]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:16:28 np0005605476 python3.9[48094]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:16:29 np0005605476 python3.9[48381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 12:16:30 np0005605476 python3.9[48531]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:16:31 np0005605476 python3.9[48685]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:16:32 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:16:33 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:16:33 np0005605476 systemd[1]: Reloading.
Feb  2 12:16:33 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:16:33 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:16:33 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:16:33 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:16:33 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:16:33 np0005605476 systemd[1]: run-ra878329ff3d64fe3b260e47632360bb3.service: Deactivated successfully.
Feb  2 12:16:34 np0005605476 python3.9[49004]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:16:34 np0005605476 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 12:16:34 np0005605476 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 12:16:34 np0005605476 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 12:16:34 np0005605476 systemd[1]: Stopping Network Manager...
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.2967] caught SIGTERM, shutting down normally.
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.2986] dhcp4 (eth0): canceled DHCP transaction
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.2986] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.2986] dhcp4 (eth0): state changed no lease
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.2989] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 12:16:34 np0005605476 NetworkManager[7196]: <info>  [1770052594.3110] exiting (success)
Feb  2 12:16:34 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 12:16:34 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 12:16:34 np0005605476 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 12:16:34 np0005605476 systemd[1]: Stopped Network Manager.
Feb  2 12:16:34 np0005605476 systemd[1]: NetworkManager.service: Consumed 12.008s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Feb  2 12:16:34 np0005605476 systemd[1]: Starting Network Manager...
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.3943] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0643d1a6-a03b-4b72-b3df-32e467e2189e)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.3945] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.3988] manager[0x55eb2e2e7000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 12:16:34 np0005605476 systemd[1]: Starting Hostname Service...
Feb  2 12:16:34 np0005605476 systemd[1]: Started Hostname Service.
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4671] hostname: hostname: using hostnamed
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4672] hostname: static hostname changed from (none) to "compute-0"
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4677] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4681] manager[0x55eb2e2e7000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4681] manager[0x55eb2e2e7000]: rfkill: WWAN hardware radio set enabled
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4703] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4713] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4713] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4714] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4715] manager: Networking is enabled by state file
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4717] settings: Loaded settings plugin: keyfile (internal)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4721] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4747] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4757] dhcp: init: Using DHCP client 'internal'
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4760] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4769] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4774] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4781] device (lo): Activation: starting connection 'lo' (e73db372-d804-4746-a9fe-87478b72a50b)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4787] device (eth0): carrier: link connected
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4790] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4795] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4795] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4801] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4807] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4812] device (eth1): carrier: link connected
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4814] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4818] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3) (indicated)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4819] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4823] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4829] device (eth1): Activation: starting connection 'ci-private-network' (ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4834] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 12:16:34 np0005605476 systemd[1]: Started Network Manager.
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4841] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4843] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4857] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4861] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4865] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4868] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4871] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4875] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4882] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4886] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4899] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4913] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4926] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4928] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4931] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4938] device (lo): Activation: successful, device activated.
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.4951] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5026] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 systemd[1]: Starting Network Manager Wait Online...
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5033] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5037] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5040] manager: NetworkManager state is now CONNECTED_LOCAL
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5042] device (eth1): Activation: successful, device activated.
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5053] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5056] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5059] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5062] device (eth0): Activation: successful, device activated.
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5067] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 12:16:34 np0005605476 NetworkManager[49022]: <info>  [1770052594.5071] manager: startup complete
Feb  2 12:16:34 np0005605476 systemd[1]: Finished Network Manager Wait Online.
Feb  2 12:16:35 np0005605476 python3.9[49231]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:16:39 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:16:39 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:16:40 np0005605476 systemd[1]: Reloading.
Feb  2 12:16:40 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:16:40 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:16:40 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:16:40 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:16:40 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:16:40 np0005605476 systemd[1]: run-rbc2bd1fc2ab34df5a0053814ea219c19.service: Deactivated successfully.
Feb  2 12:16:41 np0005605476 python3.9[49691]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:16:42 np0005605476 python3.9[49843]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:42 np0005605476 python3.9[49997]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:43 np0005605476 python3.9[50149]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:44 np0005605476 python3.9[50301]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:44 np0005605476 python3.9[50453]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:44 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 12:16:45 np0005605476 python3.9[50605]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:16:45 np0005605476 python3.9[50729]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052604.6955862-224-9211800282556/.source _original_basename=.3kuryol3 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:46 np0005605476 python3.9[50881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:46 np0005605476 python3.9[51033]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb  2 12:16:47 np0005605476 python3.9[51185]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:49 np0005605476 python3.9[51612]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb  2 12:16:50 np0005605476 ansible-async_wrapper.py[51787]: Invoked with j868347753021 300 /home/zuul/.ansible/tmp/ansible-tmp-1770052609.5062516-290-176463218179500/AnsiballZ_edpm_os_net_config.py _
Feb  2 12:16:50 np0005605476 ansible-async_wrapper.py[51790]: Starting module and watcher
Feb  2 12:16:50 np0005605476 ansible-async_wrapper.py[51790]: Start watching 51791 (300)
Feb  2 12:16:50 np0005605476 ansible-async_wrapper.py[51791]: Start module (51791)
Feb  2 12:16:50 np0005605476 ansible-async_wrapper.py[51787]: Return async_wrapper task started.
Feb  2 12:16:50 np0005605476 python3.9[51792]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Feb  2 12:16:51 np0005605476 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb  2 12:16:51 np0005605476 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb  2 12:16:51 np0005605476 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb  2 12:16:51 np0005605476 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb  2 12:16:51 np0005605476 kernel: cfg80211: failed to load regulatory.db
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.2882] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.2906] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3336] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3337] audit: op="connection-add" uuid="88dafbcf-a99d-4d8d-8656-bcee9df326da" name="br-ex-br" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3350] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3351] audit: op="connection-add" uuid="2c8d1de1-361d-4006-8b10-bd42ce1a9f30" name="br-ex-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3362] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3363] audit: op="connection-add" uuid="8acfb3bf-01b0-488b-8a56-215f50c00c5e" name="eth1-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3372] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3373] audit: op="connection-add" uuid="c51b93f4-c861-4f83-b0e0-fc4ba3295cec" name="vlan20-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3383] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3384] audit: op="connection-add" uuid="9595164e-39be-4740-b132-886bf32ea77f" name="vlan21-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3396] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3397] audit: op="connection-add" uuid="5664e705-59b2-401c-a753-51423fe272f0" name="vlan22-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3407] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3408] audit: op="connection-add" uuid="dd393d8b-4790-434b-b3a5-325e05185fca" name="vlan23-port" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3424] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3439] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.3440] audit: op="connection-add" uuid="a915a1d6-c660-40cd-ba9e-5fb4cc53ca93" name="br-ex-if" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4331] audit: op="connection-update" uuid="ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.routes,connection.port-type,connection.slave-type,connection.timestamp,connection.master,connection.controller,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.routes" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4352] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4354] audit: op="connection-add" uuid="1b68efa1-5d73-4f39-986e-4b0d3a68d43b" name="vlan20-if" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4370] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4372] audit: op="connection-add" uuid="c1183ddb-3bc3-4a47-842b-e7301930056d" name="vlan21-if" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4386] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4387] audit: op="connection-add" uuid="ba07e720-3cc0-450d-8f97-328299836fde" name="vlan22-if" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4402] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4404] audit: op="connection-add" uuid="8a748854-4e70-47e2-aa84-d866ea9e5da3" name="vlan23-if" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4415] audit: op="connection-delete" uuid="e18965a6-b5bf-33df-be23-78e096a981f9" name="Wired connection 1" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4425] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4427] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4433] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4436] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (88dafbcf-a99d-4d8d-8656-bcee9df326da)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4437] audit: op="connection-activate" uuid="88dafbcf-a99d-4d8d-8656-bcee9df326da" name="br-ex-br" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4439] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4439] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4444] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4448] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (2c8d1de1-361d-4006-8b10-bd42ce1a9f30)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4449] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4450] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4453] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4455] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (8acfb3bf-01b0-488b-8a56-215f50c00c5e)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4457] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4457] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4461] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4464] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (c51b93f4-c861-4f83-b0e0-fc4ba3295cec)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4465] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4466] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4469] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4473] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9595164e-39be-4740-b132-886bf32ea77f)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4474] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4475] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4480] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4484] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5664e705-59b2-401c-a753-51423fe272f0)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4485] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4486] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4491] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4494] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (dd393d8b-4790-434b-b3a5-325e05185fca)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4495] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4497] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4499] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4505] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4506] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4509] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4512] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a915a1d6-c660-40cd-ba9e-5fb4cc53ca93)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4513] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4516] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4518] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4519] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4520] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4528] device (eth1): disconnecting for new activation request.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4529] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4532] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4533] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4534] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4537] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4538] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4541] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4547] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1b68efa1-5d73-4f39-986e-4b0d3a68d43b)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4547] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4550] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4552] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4553] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4555] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4556] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4559] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4562] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c1183ddb-3bc3-4a47-842b-e7301930056d)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4563] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4566] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4567] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4569] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4571] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4572] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4575] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4579] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ba07e720-3cc0-450d-8f97-328299836fde)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4579] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4582] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4584] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4585] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4587] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <warn>  [1770052612.4589] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4592] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4596] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (8a748854-4e70-47e2-aa84-d866ea9e5da3)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4596] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4599] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4601] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4602] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4603] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4614] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,802-3-ethernet.mtu" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4616] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4619] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4621] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4626] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4630] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4633] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4636] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4638] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 kernel: ovs-system: entered promiscuous mode
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4643] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4653] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4656] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4657] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 systemd-udevd[51797]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4662] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4668] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4672] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4674] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4679] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4684] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4688] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4689] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4694] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4698] dhcp4 (eth0): canceled DHCP transaction
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4698] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4699] dhcp4 (eth0): state changed no lease
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4700] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4843] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4847] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51793 uid=0 result="fail" reason="Device is not activated"
Feb  2 12:16:52 np0005605476 kernel: Timeout policy base is empty
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.4853] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 12:16:52 np0005605476 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 12:16:52 np0005605476 kernel: br-ex: entered promiscuous mode
Feb  2 12:16:52 np0005605476 kernel: vlan20: entered promiscuous mode
Feb  2 12:16:52 np0005605476 kernel: vlan21: entered promiscuous mode
Feb  2 12:16:52 np0005605476 systemd-udevd[51799]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6011] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6015] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6024] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6031] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6036] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6040] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb  2 12:16:52 np0005605476 kernel: vlan22: entered promiscuous mode
Feb  2 12:16:52 np0005605476 kernel: vlan23: entered promiscuous mode
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6721] device (eth1): disconnecting for new activation request.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6721] audit: op="connection-activate" uuid="ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3" name="ci-private-network" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6721] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6880] device (eth1): Activation: starting connection 'ci-private-network' (ffe0e3fd-4ab5-587e-9bf9-f52fc90282b3)
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6886] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6888] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6889] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6891] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6893] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6894] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6896] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6901] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6929] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6936] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6937] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6941] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6948] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6954] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6960] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6965] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6970] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6975] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6980] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6985] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6990] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.6995] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7000] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7005] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7010] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7015] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7020] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51793 uid=0 result="success"
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7046] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7058] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7061] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7080] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7093] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7099] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7108] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7119] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7121] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7125] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7133] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7138] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7145] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7150] device (eth1): Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7156] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7157] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7158] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7160] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7163] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7168] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7173] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7177] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7181] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7186] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7187] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 12:16:52 np0005605476 NetworkManager[49022]: <info>  [1770052612.7195] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 12:16:53 np0005605476 NetworkManager[49022]: <info>  [1770052613.8899] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.0424] checkpoint[0x55eb2e2bc950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.0427] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 python3.9[52150]: ansible-ansible.legacy.async_status Invoked with jid=j868347753021.51787 mode=status _async_dir=/root/.ansible_async
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.2977] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.3007] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.5189] audit: op="networking-control" arg="global-dns-configuration" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.5220] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.5260] audit: op="networking-control" arg="global-dns-configuration" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.5294] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.6514] checkpoint[0x55eb2e2bca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb  2 12:16:54 np0005605476 NetworkManager[49022]: <info>  [1770052614.6518] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51793 uid=0 result="success"
Feb  2 12:16:54 np0005605476 ansible-async_wrapper.py[51791]: Module complete (51791)
Feb  2 12:16:55 np0005605476 ansible-async_wrapper.py[51790]: Done in kid B.
Feb  2 12:16:57 np0005605476 python3.9[52256]: ansible-ansible.legacy.async_status Invoked with jid=j868347753021.51787 mode=status _async_dir=/root/.ansible_async
Feb  2 12:16:57 np0005605476 python3.9[52356]: ansible-ansible.legacy.async_status Invoked with jid=j868347753021.51787 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 12:16:58 np0005605476 python3.9[52508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:16:59 np0005605476 python3.9[52631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052618.140497-317-60839224191782/.source.returncode _original_basename=.c4zm53yb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:16:59 np0005605476 python3.9[52783]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:00 np0005605476 python3.9[52906]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052619.2084112-333-19915275666307/.source.cfg _original_basename=.2v9la8ii follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:00 np0005605476 python3.9[53058]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:17:00 np0005605476 systemd[1]: Reloading Network Manager...
Feb  2 12:17:00 np0005605476 NetworkManager[49022]: <info>  [1770052620.8837] audit: op="reload" arg="0" pid=53063 uid=0 result="success"
Feb  2 12:17:00 np0005605476 NetworkManager[49022]: <info>  [1770052620.8841] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb  2 12:17:00 np0005605476 systemd[1]: Reloaded Network Manager.
Feb  2 12:17:01 np0005605476 systemd[1]: session-9.scope: Deactivated successfully.
Feb  2 12:17:01 np0005605476 systemd[1]: session-9.scope: Consumed 44.007s CPU time.
Feb  2 12:17:01 np0005605476 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Feb  2 12:17:01 np0005605476 systemd-logind[799]: Removed session 9.
Feb  2 12:17:04 np0005605476 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 12:17:06 np0005605476 systemd-logind[799]: New session 10 of user zuul.
Feb  2 12:17:06 np0005605476 systemd[1]: Started Session 10 of User zuul.
Feb  2 12:17:07 np0005605476 python3.9[53249]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:17:08 np0005605476 python3.9[53403]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:17:09 np0005605476 python3.9[53596]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:17:10 np0005605476 systemd[1]: session-10.scope: Deactivated successfully.
Feb  2 12:17:10 np0005605476 systemd[1]: session-10.scope: Consumed 2.231s CPU time.
Feb  2 12:17:10 np0005605476 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Feb  2 12:17:10 np0005605476 systemd-logind[799]: Removed session 10.
Feb  2 12:17:10 np0005605476 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 12:17:15 np0005605476 systemd-logind[799]: New session 11 of user zuul.
Feb  2 12:17:15 np0005605476 systemd[1]: Started Session 11 of User zuul.
Feb  2 12:17:16 np0005605476 python3.9[53778]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:17:17 np0005605476 python3.9[53933]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:17:18 np0005605476 python3.9[54089]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:17:19 np0005605476 python3.9[54173]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:17:20 np0005605476 python3.9[54327]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:17:21 np0005605476 python3.9[54522]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:22 np0005605476 python3.9[54674]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:17:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-compat608737840-merged.mount: Deactivated successfully.
Feb  2 12:17:22 np0005605476 podman[54675]: 2026-02-02 17:17:22.72773675 +0000 UTC m=+0.054621307 system refresh
Feb  2 12:17:23 np0005605476 python3.9[54838]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:17:23 np0005605476 python3.9[54961]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052642.8887496-74-102721971110617/.source.json follow=False _original_basename=podman_network_config.j2 checksum=6e62a2d50a29a46ba6c2011d3c4c14aff5a98288 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:25 np0005605476 python3.9[55113]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:25 np0005605476 python3.9[55236]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770052644.8752952-89-216083681604428/.source.conf follow=False _original_basename=registries.conf.j2 checksum=afa1df2f20df99cadae6785e2dec481dcc7ded84 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:17:26 np0005605476 python3.9[55388]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:17:26 np0005605476 python3.9[55540]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:17:27 np0005605476 python3.9[55692]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:17:27 np0005605476 python3.9[55844]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:17:28 np0005605476 python3.9[55996]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:17:30 np0005605476 python3.9[56149]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:17:31 np0005605476 python3.9[56303]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:17:32 np0005605476 python3.9[56455]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:17:32 np0005605476 python3.9[56607]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:17:33 np0005605476 python3.9[56760]: ansible-service_facts Invoked
Feb  2 12:17:33 np0005605476 network[56777]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:17:33 np0005605476 network[56778]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:17:33 np0005605476 network[56779]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:17:37 np0005605476 python3.9[57231]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:17:39 np0005605476 python3.9[57384]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  2 12:17:40 np0005605476 python3.9[57536]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:41 np0005605476 python3.9[57661]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052660.40465-233-158608403120450/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:42 np0005605476 python3.9[57815]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:42 np0005605476 python3.9[57940]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052661.6526487-248-121654261358799/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:43 np0005605476 python3.9[58094]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:44 np0005605476 python3.9[58248]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:17:45 np0005605476 python3.9[58332]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:17:46 np0005605476 python3.9[58486]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:17:47 np0005605476 python3.9[58570]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:17:47 np0005605476 systemd[1]: Stopping NTP client/server...
Feb  2 12:17:47 np0005605476 chronyd[789]: chronyd exiting
Feb  2 12:17:47 np0005605476 systemd[1]: chronyd.service: Deactivated successfully.
Feb  2 12:17:47 np0005605476 systemd[1]: Stopped NTP client/server.
Feb  2 12:17:47 np0005605476 systemd[1]: Starting NTP client/server...
Feb  2 12:17:47 np0005605476 chronyd[58579]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 12:17:47 np0005605476 chronyd[58579]: Frequency -28.210 +/- 0.279 ppm read from /var/lib/chrony/drift
Feb  2 12:17:47 np0005605476 chronyd[58579]: Loaded seccomp filter (level 2)
Feb  2 12:17:47 np0005605476 systemd[1]: Started NTP client/server.
Feb  2 12:17:47 np0005605476 systemd[1]: session-11.scope: Deactivated successfully.
Feb  2 12:17:47 np0005605476 systemd[1]: session-11.scope: Consumed 21.750s CPU time.
Feb  2 12:17:47 np0005605476 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Feb  2 12:17:47 np0005605476 systemd-logind[799]: Removed session 11.
Feb  2 12:17:53 np0005605476 systemd-logind[799]: New session 12 of user zuul.
Feb  2 12:17:53 np0005605476 systemd[1]: Started Session 12 of User zuul.
Feb  2 12:17:53 np0005605476 python3.9[58760]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:54 np0005605476 python3.9[58912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:17:55 np0005605476 python3.9[59035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052674.1179223-29-52453988998996/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:17:55 np0005605476 systemd[1]: session-12.scope: Deactivated successfully.
Feb  2 12:17:55 np0005605476 systemd[1]: session-12.scope: Consumed 1.285s CPU time.
Feb  2 12:17:55 np0005605476 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Feb  2 12:17:55 np0005605476 systemd-logind[799]: Removed session 12.
Feb  2 12:18:01 np0005605476 systemd-logind[799]: New session 13 of user zuul.
Feb  2 12:18:01 np0005605476 systemd[1]: Started Session 13 of User zuul.
Feb  2 12:18:02 np0005605476 python3.9[59213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:18:03 np0005605476 python3.9[59369]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:03 np0005605476 python3.9[59544]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:04 np0005605476 python3.9[59667]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1770052683.2939963-36-53685764774486/.source.json _original_basename=.twfvrovp follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:05 np0005605476 python3.9[59819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:05 np0005605476 python3.9[59942]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052684.9632034-59-63082909418694/.source _original_basename=.0w67mq32 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:06 np0005605476 python3.9[60094]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:18:06 np0005605476 python3.9[60246]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:07 np0005605476 python3.9[60369]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770052686.499993-83-137085580780603/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:18:07 np0005605476 python3.9[60521]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:08 np0005605476 python3.9[60644]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770052687.5347981-83-193586448877579/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:18:08 np0005605476 python3.9[60796]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:09 np0005605476 python3.9[60948]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:10 np0005605476 python3.9[61071]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052689.1509507-120-98012923619923/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:10 np0005605476 python3.9[61223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:10 np0005605476 python3.9[61346]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052690.170721-135-213600594926939/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:11 np0005605476 python3.9[61498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:18:12 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:12 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:12 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:12 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:12 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:12 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:12 np0005605476 systemd[1]: Starting EDPM Container Shutdown...
Feb  2 12:18:12 np0005605476 systemd[1]: Finished EDPM Container Shutdown.
Feb  2 12:18:13 np0005605476 python3.9[61724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:13 np0005605476 python3.9[61847]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052692.6103902-158-64218966713361/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:14 np0005605476 python3.9[61999]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:14 np0005605476 python3.9[62122]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052693.7868547-173-188303417934770/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:15 np0005605476 python3.9[62274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:18:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:15 np0005605476 systemd[1]: Starting Create netns directory...
Feb  2 12:18:15 np0005605476 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 12:18:15 np0005605476 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 12:18:15 np0005605476 systemd[1]: Finished Create netns directory.
Feb  2 12:18:16 np0005605476 python3.9[62500]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:18:16 np0005605476 network[62517]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:18:16 np0005605476 network[62518]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:18:16 np0005605476 network[62519]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:18:19 np0005605476 python3.9[62781]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:18:19 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:19 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:19 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:19 np0005605476 systemd[1]: Stopping IPv4 firewall with iptables...
Feb  2 12:18:19 np0005605476 iptables.init[62820]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb  2 12:18:19 np0005605476 iptables.init[62820]: iptables: Flushing firewall rules: [  OK  ]
Feb  2 12:18:19 np0005605476 systemd[1]: iptables.service: Deactivated successfully.
Feb  2 12:18:19 np0005605476 systemd[1]: Stopped IPv4 firewall with iptables.
Feb  2 12:18:20 np0005605476 python3.9[63016]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:18:21 np0005605476 python3.9[63170]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:18:21 np0005605476 systemd[1]: Reloading.
Feb  2 12:18:21 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:18:21 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:18:21 np0005605476 systemd[1]: Starting Netfilter Tables...
Feb  2 12:18:21 np0005605476 systemd[1]: Finished Netfilter Tables.
Feb  2 12:18:22 np0005605476 python3.9[63362]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:18:23 np0005605476 python3.9[63515]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:23 np0005605476 python3.9[63640]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052702.7324562-242-38643527230817/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:24 np0005605476 python3.9[63793]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:18:24 np0005605476 systemd[1]: Reloading OpenSSH server daemon...
Feb  2 12:18:24 np0005605476 systemd[1]: Reloaded OpenSSH server daemon.
Feb  2 12:18:24 np0005605476 python3.9[63950]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:25 np0005605476 python3.9[64102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:25 np0005605476 python3.9[64225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052705.0881689-273-244134552611814/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:26 np0005605476 python3.9[64377]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 12:18:26 np0005605476 systemd[1]: Starting Time & Date Service...
Feb  2 12:18:26 np0005605476 systemd[1]: Started Time & Date Service.
Feb  2 12:18:27 np0005605476 python3.9[64533]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:28 np0005605476 python3.9[64685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:28 np0005605476 python3.9[64808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052707.6152203-308-261517276060400/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:29 np0005605476 python3.9[64960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:29 np0005605476 python3.9[65083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770052708.6635065-323-215363672833548/.source.yaml _original_basename=.z8tdqfgm follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:30 np0005605476 python3.9[65235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:30 np0005605476 python3.9[65358]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052709.6286967-338-159279419255697/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:31 np0005605476 python3.9[65510]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:18:31 np0005605476 python3.9[65663]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:18:32 np0005605476 python3[65816]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 12:18:32 np0005605476 python3.9[65968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:33 np0005605476 python3.9[66091]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052712.3659573-377-253775243973755/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:33 np0005605476 python3.9[66243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:34 np0005605476 python3.9[66366]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052713.413514-392-178086797495345/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:35 np0005605476 python3.9[66518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:35 np0005605476 python3.9[66641]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052714.5351636-407-38354242149085/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:36 np0005605476 python3.9[66793]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:36 np0005605476 python3.9[66916]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052715.6748443-422-150828909215836/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:37 np0005605476 python3.9[67068]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:18:37 np0005605476 python3.9[67191]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770052716.727464-437-186265420089596/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:38 np0005605476 python3.9[67343]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:38 np0005605476 python3.9[67495]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:18:39 np0005605476 python3.9[67654]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:40 np0005605476 python3.9[67807]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:40 np0005605476 python3.9[67959]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:41 np0005605476 python3.9[68111]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 12:18:41 np0005605476 python3.9[68264]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 12:18:42 np0005605476 systemd[1]: session-13.scope: Deactivated successfully.
Feb  2 12:18:42 np0005605476 systemd[1]: session-13.scope: Consumed 28.873s CPU time.
Feb  2 12:18:42 np0005605476 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Feb  2 12:18:42 np0005605476 systemd-logind[799]: Removed session 13.
Feb  2 12:18:47 np0005605476 systemd-logind[799]: New session 14 of user zuul.
Feb  2 12:18:47 np0005605476 systemd[1]: Started Session 14 of User zuul.
Feb  2 12:18:48 np0005605476 python3.9[68445]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 12:18:49 np0005605476 python3.9[68597]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:18:50 np0005605476 python3.9[68749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:18:51 np0005605476 python3.9[68901]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ6SKSWlzfPU7f7RjN8CFlU375FDDWhb5oRWZrAT4j1px0qJtS9EUoEsYN3Svj45HwgIj7T4L2iiV4fqCeTgFZPq/4EMyOiuIcb6mPFRhO5rV8GFKR83vwwdSnltqS+Wh83m6FsFc38evlSQHewlszztQW5H3sJH8XzOYPvSSAbpwGfukhBmr4nL9btc77XALuIi4XdgZprbGHwAg9IsqqROASIaJ7KZ7Aizr7aOJPuvetUYoHBykOQ4ka4Y8nPexVqjyguk8Pszdv+VNX+6/UEEM2DLGmfuNElBpHOLwRHdXra75FcC3zj4MOyWyvK4HvoiKK9rw0lzyZvlQZK/qeAefgDaAkaJXSjdUDjst9yuKFEcwC9YlIveLG7jq9sPfgSGJwVTBiVoCxNC+QHpbYs6SP+xnDeOndwkBraidIR8ruBZKu+ywEaVpjYoGrastkBD0CL6VfGw9sNHsWDjrw7Cbg6kuuzjTSP+VCj2oOWZv9ZofzACpCGftzfZggo+s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPKrsA58m69x7APjvzXvaVbYTk7XdsFY3HNzsBZWPxir#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCo16m/qvXepjRYVF6qP7nMQdK8bChxoaiXB4sppkC0pGQbaJTq3OB+7vpaqEYym/PNGusm1gpPqmortJLj1DbU=#012 create=True mode=0644 path=/tmp/ansible.kce_xr9k state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:51 np0005605476 python3.9[69053]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kce_xr9k' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:18:52 np0005605476 python3.9[69207]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kce_xr9k state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:18:52 np0005605476 systemd[1]: session-14.scope: Deactivated successfully.
Feb  2 12:18:52 np0005605476 systemd[1]: session-14.scope: Consumed 2.981s CPU time.
Feb  2 12:18:52 np0005605476 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Feb  2 12:18:52 np0005605476 systemd-logind[799]: Removed session 14.
Feb  2 12:18:56 np0005605476 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 12:18:58 np0005605476 systemd-logind[799]: New session 15 of user zuul.
Feb  2 12:18:58 np0005605476 systemd[1]: Started Session 15 of User zuul.
Feb  2 12:18:59 np0005605476 python3.9[69387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:19:00 np0005605476 python3.9[69543]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 12:19:00 np0005605476 python3.9[69697]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:19:01 np0005605476 python3.9[69850]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:02 np0005605476 python3.9[70003]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:19:03 np0005605476 python3.9[70157]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:03 np0005605476 python3.9[70312]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:04 np0005605476 systemd[1]: session-15.scope: Deactivated successfully.
Feb  2 12:19:04 np0005605476 systemd[1]: session-15.scope: Consumed 3.756s CPU time.
Feb  2 12:19:04 np0005605476 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Feb  2 12:19:04 np0005605476 systemd-logind[799]: Removed session 15.
Feb  2 12:19:09 np0005605476 systemd-logind[799]: New session 16 of user zuul.
Feb  2 12:19:09 np0005605476 systemd[1]: Started Session 16 of User zuul.
Feb  2 12:19:09 np0005605476 python3.9[70490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:19:10 np0005605476 python3.9[70646]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:19:11 np0005605476 python3.9[70730]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 12:19:13 np0005605476 python3.9[70881]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:15 np0005605476 python3.9[71032]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:19:15 np0005605476 python3.9[71182]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:19:15 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:19:16 np0005605476 python3.9[71333]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:19:16 np0005605476 systemd[1]: session-16.scope: Deactivated successfully.
Feb  2 12:19:16 np0005605476 systemd[1]: session-16.scope: Consumed 5.339s CPU time.
Feb  2 12:19:16 np0005605476 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Feb  2 12:19:16 np0005605476 systemd-logind[799]: Removed session 16.
Feb  2 12:19:24 np0005605476 systemd-logind[799]: New session 17 of user zuul.
Feb  2 12:19:24 np0005605476 systemd[1]: Started Session 17 of User zuul.
Feb  2 12:19:29 np0005605476 python3[72100]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:19:30 np0005605476 python3[72196]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 12:19:32 np0005605476 python3[72223]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:32 np0005605476 python3[72249]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:32 np0005605476 kernel: loop: module loaded
Feb  2 12:19:32 np0005605476 kernel: loop3: detected capacity change from 0 to 41943040
Feb  2 12:19:33 np0005605476 python3[72284]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:33 np0005605476 lvm[72287]: PV /dev/loop3 not used.
Feb  2 12:19:33 np0005605476 lvm[72296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:19:33 np0005605476 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb  2 12:19:33 np0005605476 lvm[72298]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb  2 12:19:33 np0005605476 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb  2 12:19:33 np0005605476 python3[72376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:19:33 np0005605476 python3[72449]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052773.3932898-36306-45005457498425/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:34 np0005605476 python3[72499]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:19:34 np0005605476 systemd[1]: Reloading.
Feb  2 12:19:34 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:19:34 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:19:34 np0005605476 systemd[1]: Starting Ceph OSD losetup...
Feb  2 12:19:34 np0005605476 bash[72539]: /dev/loop3: [64513]:4329560 (/var/lib/ceph-osd-0.img)
Feb  2 12:19:34 np0005605476 systemd[1]: Finished Ceph OSD losetup.
Feb  2 12:19:34 np0005605476 lvm[72540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:19:34 np0005605476 lvm[72540]: VG ceph_vg0 finished
Feb  2 12:19:35 np0005605476 python3[72566]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 12:19:36 np0005605476 python3[72593]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:37 np0005605476 python3[72619]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:37 np0005605476 kernel: loop4: detected capacity change from 0 to 41943040
Feb  2 12:19:37 np0005605476 python3[72651]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:37 np0005605476 lvm[72654]: PV /dev/loop4 not used.
Feb  2 12:19:37 np0005605476 lvm[72656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:19:37 np0005605476 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb  2 12:19:37 np0005605476 lvm[72667]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:19:37 np0005605476 lvm[72667]: VG ceph_vg1 finished
Feb  2 12:19:37 np0005605476 lvm[72665]:  1 logical volume(s) in volume group "ceph_vg1" now active
Feb  2 12:19:37 np0005605476 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb  2 12:19:37 np0005605476 python3[72745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:19:38 np0005605476 python3[72818]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052777.7267458-36333-22816603504389/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:38 np0005605476 python3[72868]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:19:38 np0005605476 systemd[1]: Reloading.
Feb  2 12:19:38 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:19:38 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:19:38 np0005605476 systemd[1]: Starting Ceph OSD losetup...
Feb  2 12:19:38 np0005605476 bash[72909]: /dev/loop4: [64513]:4599419 (/var/lib/ceph-osd-1.img)
Feb  2 12:19:39 np0005605476 systemd[1]: Finished Ceph OSD losetup.
Feb  2 12:19:39 np0005605476 lvm[72910]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:19:39 np0005605476 lvm[72910]: VG ceph_vg1 finished
Feb  2 12:19:39 np0005605476 python3[72936]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 12:19:40 np0005605476 python3[72963]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:41 np0005605476 python3[72989]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:41 np0005605476 kernel: loop5: detected capacity change from 0 to 41943040
Feb  2 12:19:41 np0005605476 python3[73021]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:41 np0005605476 lvm[73024]: PV /dev/loop5 not used.
Feb  2 12:19:41 np0005605476 lvm[73026]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:19:41 np0005605476 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb  2 12:19:41 np0005605476 lvm[73032]:  1 logical volume(s) in volume group "ceph_vg2" now active
Feb  2 12:19:41 np0005605476 lvm[73037]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:19:41 np0005605476 lvm[73037]: VG ceph_vg2 finished
Feb  2 12:19:41 np0005605476 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb  2 12:19:41 np0005605476 python3[73115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:19:42 np0005605476 python3[73188]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052781.7075236-36360-105219835360429/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:42 np0005605476 python3[73238]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:19:42 np0005605476 systemd[1]: Reloading.
Feb  2 12:19:42 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:19:42 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:19:42 np0005605476 systemd[1]: Starting Ceph OSD losetup...
Feb  2 12:19:42 np0005605476 bash[73277]: /dev/loop5: [64513]:4642264 (/var/lib/ceph-osd-2.img)
Feb  2 12:19:42 np0005605476 systemd[1]: Finished Ceph OSD losetup.
Feb  2 12:19:42 np0005605476 lvm[73278]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:19:42 np0005605476 lvm[73278]: VG ceph_vg2 finished
Feb  2 12:19:44 np0005605476 python3[73302]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:19:46 np0005605476 python3[73395]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 12:19:48 np0005605476 python3[73452]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 12:19:51 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:19:51 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:19:52 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:19:52 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:19:52 np0005605476 systemd[1]: run-r978e6aadb286432a8711ded7b83a24c6.service: Deactivated successfully.
Feb  2 12:19:52 np0005605476 python3[73571]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:52 np0005605476 python3[73599]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:19:53 np0005605476 python3[73639]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:53 np0005605476 python3[73665]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:54 np0005605476 python3[73743]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:19:54 np0005605476 python3[73816]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052793.934217-36508-127673466094816/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:55 np0005605476 python3[73918]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:19:55 np0005605476 python3[73991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052794.8756254-36526-20217209569041/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:19:55 np0005605476 python3[74041]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:56 np0005605476 python3[74069]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:56 np0005605476 python3[74097]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:56 np0005605476 python3[74123]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:19:56 np0005605476 python3[74149]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid eb48d0ef-3496-563c-b73d-661fb962013e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:19:57 np0005605476 systemd-logind[799]: New session 18 of user ceph-admin.
Feb  2 12:19:57 np0005605476 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 12:19:57 np0005605476 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 12:19:57 np0005605476 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 12:19:57 np0005605476 systemd[1]: Starting User Manager for UID 42477...
Feb  2 12:19:57 np0005605476 systemd[74157]: Queued start job for default target Main User Target.
Feb  2 12:19:57 np0005605476 systemd[74157]: Created slice User Application Slice.
Feb  2 12:19:57 np0005605476 systemd[74157]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 12:19:57 np0005605476 systemd[74157]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 12:19:57 np0005605476 systemd[74157]: Reached target Paths.
Feb  2 12:19:57 np0005605476 systemd[74157]: Reached target Timers.
Feb  2 12:19:57 np0005605476 systemd[74157]: Starting D-Bus User Message Bus Socket...
Feb  2 12:19:57 np0005605476 systemd[74157]: Starting Create User's Volatile Files and Directories...
Feb  2 12:19:57 np0005605476 systemd[74157]: Finished Create User's Volatile Files and Directories.
Feb  2 12:19:57 np0005605476 systemd[74157]: Listening on D-Bus User Message Bus Socket.
Feb  2 12:19:57 np0005605476 systemd[74157]: Reached target Sockets.
Feb  2 12:19:57 np0005605476 systemd[74157]: Reached target Basic System.
Feb  2 12:19:57 np0005605476 systemd[74157]: Reached target Main User Target.
Feb  2 12:19:57 np0005605476 systemd[74157]: Startup finished in 112ms.
Feb  2 12:19:57 np0005605476 systemd[1]: Started User Manager for UID 42477.
Feb  2 12:19:57 np0005605476 systemd[1]: Started Session 18 of User ceph-admin.
Feb  2 12:19:57 np0005605476 systemd[1]: session-18.scope: Deactivated successfully.
Feb  2 12:19:57 np0005605476 systemd-logind[799]: Session 18 logged out. Waiting for processes to exit.
Feb  2 12:19:57 np0005605476 systemd-logind[799]: Removed session 18.
Feb  2 12:19:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:19:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:19:57 np0005605476 chronyd[58579]: Selected source 51.222.111.13 (pool.ntp.org)
Feb  2 12:20:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-compat355103601-lower\x2dmapped.mount: Deactivated successfully.
Feb  2 12:20:07 np0005605476 systemd[1]: Stopping User Manager for UID 42477...
Feb  2 12:20:07 np0005605476 systemd[74157]: Activating special unit Exit the Session...
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped target Main User Target.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped target Basic System.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped target Paths.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped target Sockets.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped target Timers.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 12:20:07 np0005605476 systemd[74157]: Closed D-Bus User Message Bus Socket.
Feb  2 12:20:07 np0005605476 systemd[74157]: Stopped Create User's Volatile Files and Directories.
Feb  2 12:20:07 np0005605476 systemd[74157]: Removed slice User Application Slice.
Feb  2 12:20:07 np0005605476 systemd[74157]: Reached target Shutdown.
Feb  2 12:20:07 np0005605476 systemd[74157]: Finished Exit the Session.
Feb  2 12:20:07 np0005605476 systemd[74157]: Reached target Exit the Session.
Feb  2 12:20:07 np0005605476 systemd[1]: user@42477.service: Deactivated successfully.
Feb  2 12:20:07 np0005605476 systemd[1]: Stopped User Manager for UID 42477.
Feb  2 12:20:07 np0005605476 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb  2 12:20:07 np0005605476 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb  2 12:20:07 np0005605476 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb  2 12:20:07 np0005605476 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb  2 12:20:07 np0005605476 systemd[1]: Removed slice User Slice of UID 42477.
Feb  2 12:20:13 np0005605476 podman[74251]: 2026-02-02 17:20:13.803841963 +0000 UTC m=+16.007064167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:13 np0005605476 podman[74327]: 2026-02-02 17:20:13.87817027 +0000 UTC m=+0.050355972 container create 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:20:13 np0005605476 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb  2 12:20:13 np0005605476 systemd[1]: Started libpod-conmon-0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641.scope.
Feb  2 12:20:13 np0005605476 podman[74327]: 2026-02-02 17:20:13.852845215 +0000 UTC m=+0.025030977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:13 np0005605476 podman[74327]: 2026-02-02 17:20:13.998278758 +0000 UTC m=+0.170464480 container init 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:14 np0005605476 podman[74327]: 2026-02-02 17:20:14.005489031 +0000 UTC m=+0.177674723 container start 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:14 np0005605476 podman[74327]: 2026-02-02 17:20:14.009262198 +0000 UTC m=+0.181447890 container attach 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:14 np0005605476 xenodochial_hawking[74344]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74327]: 2026-02-02 17:20:14.111382338 +0000 UTC m=+0.283568020 container died 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:20:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-46b79f0df1a32e4a84b5e3a410af5f70053c34452801e6e2c6a384fd6eaf2f84-merged.mount: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74327]: 2026-02-02 17:20:14.149828693 +0000 UTC m=+0.322014375 container remove 0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641 (image=quay.io/ceph/ceph:v20, name=xenodochial_hawking, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-conmon-0f48a3ee102db8bbb74fc9ef83809b2def0a5ad3540a7acb2904ce3e51466641.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.216193905 +0000 UTC m=+0.044825576 container create bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:20:14 np0005605476 systemd[1]: Started libpod-conmon-bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95.scope.
Feb  2 12:20:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.274592772 +0000 UTC m=+0.103224463 container init bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.279363167 +0000 UTC m=+0.107994858 container start bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.282793264 +0000 UTC m=+0.111424955 container attach bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:20:14 np0005605476 trusting_swanson[74377]: 167 167
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.28408607 +0000 UTC m=+0.112717741 container died bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.196776627 +0000 UTC m=+0.025408348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:14 np0005605476 podman[74361]: 2026-02-02 17:20:14.319340265 +0000 UTC m=+0.147971936 container remove bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95 (image=quay.io/ceph/ceph:v20, name=trusting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-conmon-bb866ad5f42a63f2fd69a4ad1fff8133026a8288e5e874935ec2f92a860aae95.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.371334721 +0000 UTC m=+0.036997394 container create b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:14 np0005605476 systemd[1]: Started libpod-conmon-b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427.scope.
Feb  2 12:20:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.415102126 +0000 UTC m=+0.080764799 container init b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.419197191 +0000 UTC m=+0.084859854 container start b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.422111534 +0000 UTC m=+0.087774197 container attach b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:14 np0005605476 charming_bhabha[74410]: AQDO3IBp8uEeGhAApihHwK6F2DMG0tchuLfGMg==
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.441231663 +0000 UTC m=+0.106894326 container died b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.355075163 +0000 UTC m=+0.020737846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:14 np0005605476 podman[74394]: 2026-02-02 17:20:14.472404422 +0000 UTC m=+0.138067115 container remove b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427 (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-conmon-b2ad47016c616c9804a4499a517f92dbed79fdaefad36cd69d0f08ed6fac2427.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.550543726 +0000 UTC m=+0.059354135 container create 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:20:14 np0005605476 systemd[1]: Started libpod-conmon-922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78.scope.
Feb  2 12:20:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.611524327 +0000 UTC m=+0.120334766 container init 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.616199709 +0000 UTC m=+0.125010078 container start 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.525983594 +0000 UTC m=+0.034794053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.621271542 +0000 UTC m=+0.130081931 container attach 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:14 np0005605476 practical_margulis[74446]: AQDO3IBpNgyyJRAA+FUrK5eAzrpP0CS3UBwluA==
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.634803273 +0000 UTC m=+0.143613652 container died 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:14 np0005605476 podman[74429]: 2026-02-02 17:20:14.67297272 +0000 UTC m=+0.181783089 container remove 922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78 (image=quay.io/ceph/ceph:v20, name=practical_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-conmon-922aff9589bddfac2200a167fcdf09e3ef0cacee8767b98296d68f1dc9cf4d78.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.72579073 +0000 UTC m=+0.035937615 container create 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb  2 12:20:14 np0005605476 systemd[1]: Started libpod-conmon-07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b.scope.
Feb  2 12:20:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.793750517 +0000 UTC m=+0.103897442 container init 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.799564881 +0000 UTC m=+0.109711756 container start 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.71017746 +0000 UTC m=+0.020324375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.807012281 +0000 UTC m=+0.117159216 container attach 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:20:14 np0005605476 vibrant_hamilton[74481]: AQDO3IBp16/BMBAAa1D3j3UGOM2BubWV4dwfHQ==
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.82042573 +0000 UTC m=+0.130572705 container died 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e318cf29d0414b3a58b16106654d8c279b7a1836b73e3481303b1c10b2a74d7e-merged.mount: Deactivated successfully.
Feb  2 12:20:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74465]: 2026-02-02 17:20:14.856864177 +0000 UTC m=+0.167011082 container remove 07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b (image=quay.io/ceph/ceph:v20, name=vibrant_hamilton, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:20:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:14 np0005605476 systemd[1]: libpod-conmon-07df59e9800692caf36f920d93c4f5ecdcbb88b731c231167be3d2d82d152a6b.scope: Deactivated successfully.
Feb  2 12:20:14 np0005605476 podman[74498]: 2026-02-02 17:20:14.927770468 +0000 UTC m=+0.045778013 container create 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:20:14 np0005605476 systemd[1]: Started libpod-conmon-723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd.scope.
Feb  2 12:20:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cfa30934c318bbe99732f36ea5f22790639b15593e2040c29a6ae5d57e68c0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:14.908950927 +0000 UTC m=+0.026958452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:15.005738797 +0000 UTC m=+0.123746402 container init 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:15.011905131 +0000 UTC m=+0.129912676 container start 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:15.01578248 +0000 UTC m=+0.133790025 container attach 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:15 np0005605476 intelligent_carver[74514]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb  2 12:20:15 np0005605476 intelligent_carver[74514]: setting min_mon_release = tentacle
Feb  2 12:20:15 np0005605476 intelligent_carver[74514]: /usr/bin/monmaptool: set fsid to eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:15 np0005605476 intelligent_carver[74514]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb  2 12:20:15 np0005605476 systemd[1]: libpod-723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd.scope: Deactivated successfully.
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:15.0614968 +0000 UTC m=+0.179504335 container died 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:15 np0005605476 podman[74498]: 2026-02-02 17:20:15.098345079 +0000 UTC m=+0.216352624 container remove 723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd (image=quay.io/ceph/ceph:v20, name=intelligent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:20:15 np0005605476 systemd[1]: libpod-conmon-723b695ae49e3356cd17e66f7abcaffe70355a68bc3a10e15af4cd99af58b1cd.scope: Deactivated successfully.
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.167383137 +0000 UTC m=+0.047980145 container create 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:20:15 np0005605476 systemd[1]: Started libpod-conmon-9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417.scope.
Feb  2 12:20:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d164177d258b6d3560969fe80a3c7d28fa6fd3737d00bedc17df1636ba4e981/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d164177d258b6d3560969fe80a3c7d28fa6fd3737d00bedc17df1636ba4e981/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d164177d258b6d3560969fe80a3c7d28fa6fd3737d00bedc17df1636ba4e981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d164177d258b6d3560969fe80a3c7d28fa6fd3737d00bedc17df1636ba4e981/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.144547783 +0000 UTC m=+0.025144811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.256644515 +0000 UTC m=+0.137241503 container init 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.263290882 +0000 UTC m=+0.143887860 container start 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.266715029 +0000 UTC m=+0.147312037 container attach 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:20:15 np0005605476 systemd[1]: libpod-9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417.scope: Deactivated successfully.
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.358693623 +0000 UTC m=+0.239290601 container died 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:20:15 np0005605476 podman[74533]: 2026-02-02 17:20:15.388614487 +0000 UTC m=+0.269211465 container remove 9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417 (image=quay.io/ceph/ceph:v20, name=agitated_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:15 np0005605476 systemd[1]: libpod-conmon-9a86589f9115bbe11f5f4f1b3eff9830c1dfeee8507ba80ddc4aaa6411d9f417.scope: Deactivated successfully.
Feb  2 12:20:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:15 np0005605476 systemd[1]: Reached target All Ceph clusters and services.
Feb  2 12:20:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:16 np0005605476 systemd[1]: Reached target Ceph cluster eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:16 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:16 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:16 np0005605476 systemd[1]: Created slice Slice /system/ceph-eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:16 np0005605476 systemd[1]: Reached target System Time Set.
Feb  2 12:20:16 np0005605476 systemd[1]: Reached target System Time Synchronized.
Feb  2 12:20:16 np0005605476 systemd[1]: Starting Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:16 np0005605476 podman[74827]: 2026-02-02 17:20:16.748449846 +0000 UTC m=+0.038363413 container create a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6d3d5ee233a342c0d0254077f570a5a4b827544dbc70feab96f08c2e751060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6d3d5ee233a342c0d0254077f570a5a4b827544dbc70feab96f08c2e751060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6d3d5ee233a342c0d0254077f570a5a4b827544dbc70feab96f08c2e751060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6d3d5ee233a342c0d0254077f570a5a4b827544dbc70feab96f08c2e751060/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:16 np0005605476 podman[74827]: 2026-02-02 17:20:16.813040668 +0000 UTC m=+0.102954295 container init a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:20:16 np0005605476 podman[74827]: 2026-02-02 17:20:16.819339626 +0000 UTC m=+0.109253203 container start a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:16 np0005605476 bash[74827]: a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679
Feb  2 12:20:16 np0005605476 podman[74827]: 2026-02-02 17:20:16.731281412 +0000 UTC m=+0.021194999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:16 np0005605476 systemd[1]: Started Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: pidfile_write: ignore empty --pid-file
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: load: jerasure load: lrc 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Git sha 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: DB SUMMARY
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: DB Session ID:  GD9Q6Q1XOHDFS4N0THM8
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                                     Options.env: 0x55c1b4c57440
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                                Options.info_log: 0x55c1b5b0f3e0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                                 Options.wal_dir: 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                    Options.write_buffer_manager: 0x55c1b5a8e140
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                               Options.row_cache: None
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                              Options.wal_filter: None
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.wal_compression: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.max_background_jobs: 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Compression algorithms supported:
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kZSTD supported: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kXpressCompression supported: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kZlibCompression supported: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:           Options.merge_operator: 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:        Options.compaction_filter: None
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c1b5a9a700)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c1b5a7f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.compression: NoCompression
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.num_levels: 7
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 25cd6f31-be6a-4568-affa-77d2d10d4958
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052816874358, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052816876842, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "GD9Q6Q1XOHDFS4N0THM8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052816876947, "job": 1, "event": "recovery_finished"}
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb  2 12:20:16 np0005605476 podman[74848]: 2026-02-02 17:20:16.886895961 +0000 UTC m=+0.038976920 container create a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c1b5aace00
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: DB pointer 0x55c1b5bf8000
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c1b5a7f8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@-1(???) e0 preinit fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(probing) e0 win_standalone_election
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:16 np0005605476 podman[74848]: 2026-02-02 17:20:16.869371977 +0000 UTC m=+0.021452856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 12:20:16 np0005605476 ceph-mon[74847]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:16 np0005605476 systemd[1]: Started libpod-conmon-a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf.scope.
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T17:20:15.057605+0000
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : created 2026-02-02T17:20:15.057605+0000
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-02T17:20:15.304317Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864288,os=Linux}
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).mds e1 new map
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-02-02T17:20:17:007731+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mkfs eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 12:20:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f4ac0705325324cf1d32352897014b621a16d51b4d5731a33d78105208698d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f4ac0705325324cf1d32352897014b621a16d51b4d5731a33d78105208698d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f4ac0705325324cf1d32352897014b621a16d51b4d5731a33d78105208698d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 podman[74848]: 2026-02-02 17:20:17.058164173 +0000 UTC m=+0.210245052 container init a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:17 np0005605476 podman[74848]: 2026-02-02 17:20:17.067722252 +0000 UTC m=+0.219803121 container start a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:20:17 np0005605476 podman[74848]: 2026-02-02 17:20:17.071074987 +0000 UTC m=+0.223155866 container attach a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2665639744' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  2 12:20:17 np0005605476 confident_greider[74902]:  cluster:
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    id:     eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    health: HEALTH_OK
Feb  2 12:20:17 np0005605476 confident_greider[74902]: 
Feb  2 12:20:17 np0005605476 confident_greider[74902]:  services:
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    mon: 1 daemons, quorum compute-0 (age 0.26682s) [leader: compute-0]
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    mgr: no daemons active
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    osd: 0 osds: 0 up, 0 in
Feb  2 12:20:17 np0005605476 confident_greider[74902]: 
Feb  2 12:20:17 np0005605476 confident_greider[74902]:  data:
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    pools:   0 pools, 0 pgs
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    objects: 0 objects, 0 B
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    usage:   0 B used, 0 B / 0 B avail
Feb  2 12:20:17 np0005605476 confident_greider[74902]:    pgs:     
Feb  2 12:20:17 np0005605476 confident_greider[74902]: 
Feb  2 12:20:17 np0005605476 systemd[1]: libpod-a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf.scope: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74848]: 2026-02-02 17:20:17.288942413 +0000 UTC m=+0.441023272 container died a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:20:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f6f4ac0705325324cf1d32352897014b621a16d51b4d5731a33d78105208698d-merged.mount: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74848]: 2026-02-02 17:20:17.319874605 +0000 UTC m=+0.471955474 container remove a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf (image=quay.io/ceph/ceph:v20, name=confident_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:17 np0005605476 systemd[1]: libpod-conmon-a5f6d87052a3400bbef420895eb0a2c2fb855f06057057a8c9c526bc5f7d9ccf.scope: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.394303405 +0000 UTC m=+0.053981274 container create 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:20:17 np0005605476 systemd[1]: Started libpod-conmon-274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977.scope.
Feb  2 12:20:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75eb591ed07d3240697bd91a5ba283547cb8325ffe817bd8f664a9b11add40c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75eb591ed07d3240697bd91a5ba283547cb8325ffe817bd8f664a9b11add40c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75eb591ed07d3240697bd91a5ba283547cb8325ffe817bd8f664a9b11add40c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75eb591ed07d3240697bd91a5ba283547cb8325ffe817bd8f664a9b11add40c4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.37075017 +0000 UTC m=+0.030428059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.470527115 +0000 UTC m=+0.130205024 container init 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.477965155 +0000 UTC m=+0.137642984 container start 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.481072252 +0000 UTC m=+0.140750101 container attach 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3918397969' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:20:17 np0005605476 ceph-mon[74847]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3918397969' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 12:20:17 np0005605476 awesome_kalam[74957]: 
Feb  2 12:20:17 np0005605476 awesome_kalam[74957]: [global]
Feb  2 12:20:17 np0005605476 awesome_kalam[74957]: #011fsid = eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:17 np0005605476 awesome_kalam[74957]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  2 12:20:17 np0005605476 awesome_kalam[74957]: #011osd_crush_chooseleaf_type = 0
Feb  2 12:20:17 np0005605476 systemd[1]: libpod-274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977.scope: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.727689669 +0000 UTC m=+0.387367488 container died 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:20:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-75eb591ed07d3240697bd91a5ba283547cb8325ffe817bd8f664a9b11add40c4-merged.mount: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74940]: 2026-02-02 17:20:17.766272948 +0000 UTC m=+0.425950777 container remove 274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977 (image=quay.io/ceph/ceph:v20, name=awesome_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:20:17 np0005605476 systemd[1]: libpod-conmon-274afa028ea20f133895e59d784523a47761bf5ca7e61bfd060df0c02f258977.scope: Deactivated successfully.
Feb  2 12:20:17 np0005605476 podman[74993]: 2026-02-02 17:20:17.810482375 +0000 UTC m=+0.031774488 container create 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:20:17 np0005605476 systemd[1]: Started libpod-conmon-13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6.scope.
Feb  2 12:20:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06726a90de8e20195a5cb1ed9a9dcac77cb06124bd1bc93e50276fec42dae1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06726a90de8e20195a5cb1ed9a9dcac77cb06124bd1bc93e50276fec42dae1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06726a90de8e20195a5cb1ed9a9dcac77cb06124bd1bc93e50276fec42dae1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06726a90de8e20195a5cb1ed9a9dcac77cb06124bd1bc93e50276fec42dae1f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:17 np0005605476 podman[74993]: 2026-02-02 17:20:17.87839077 +0000 UTC m=+0.099682933 container init 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:20:17 np0005605476 podman[74993]: 2026-02-02 17:20:17.882200938 +0000 UTC m=+0.103493051 container start 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:20:17 np0005605476 podman[74993]: 2026-02-02 17:20:17.88547401 +0000 UTC m=+0.106766123 container attach 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:20:17 np0005605476 podman[74993]: 2026-02-02 17:20:17.796328495 +0000 UTC m=+0.017620628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: from='client.? 192.168.122.100:0/3918397969' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: from='client.? 192.168.122.100:0/3918397969' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249203893' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:18 np0005605476 systemd[1]: libpod-13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6.scope: Deactivated successfully.
Feb  2 12:20:18 np0005605476 conmon[75010]: conmon 13b9fdb2c238c235eac0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6.scope/container/memory.events
Feb  2 12:20:18 np0005605476 podman[74993]: 2026-02-02 17:20:18.078619429 +0000 UTC m=+0.299911542 container died 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d06726a90de8e20195a5cb1ed9a9dcac77cb06124bd1bc93e50276fec42dae1f-merged.mount: Deactivated successfully.
Feb  2 12:20:18 np0005605476 podman[74993]: 2026-02-02 17:20:18.111754753 +0000 UTC m=+0.333046866 container remove 13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6 (image=quay.io/ceph/ceph:v20, name=epic_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:20:18 np0005605476 systemd[1]: libpod-conmon-13b9fdb2c238c235eac002243c75381462bb7065bf4c4fd33bb24c1da51b8ea6.scope: Deactivated successfully.
Feb  2 12:20:18 np0005605476 systemd[1]: Stopping Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: mon.compute-0@0(leader) e1 shutdown
Feb  2 12:20:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0[74843]: 2026-02-02T17:20:18.306+0000 7fc71837c640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 12:20:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0[74843]: 2026-02-02T17:20:18.306+0000 7fc71837c640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 12:20:18 np0005605476 ceph-mon[74847]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 12:20:18 np0005605476 podman[75076]: 2026-02-02 17:20:18.439449387 +0000 UTC m=+0.180145363 container died a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2f6d3d5ee233a342c0d0254077f570a5a4b827544dbc70feab96f08c2e751060-merged.mount: Deactivated successfully.
Feb  2 12:20:18 np0005605476 podman[75076]: 2026-02-02 17:20:18.510024428 +0000 UTC m=+0.250720414 container remove a7ddfae6425e7465a4fed1c13136b5c1ca01cdd864b590df36b49c66c3300679 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:18 np0005605476 bash[75076]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0
Feb  2 12:20:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 12:20:18 np0005605476 systemd[1]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mon.compute-0.service: Deactivated successfully.
Feb  2 12:20:18 np0005605476 systemd[1]: Stopped Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:18 np0005605476 systemd[1]: Starting Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:18 np0005605476 podman[75178]: 2026-02-02 17:20:18.809856966 +0000 UTC m=+0.053640104 container create 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dc665d5b607e11e5261cb0bf495273dbf4c1b133126a1378805e20594ccaad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dc665d5b607e11e5261cb0bf495273dbf4c1b133126a1378805e20594ccaad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dc665d5b607e11e5261cb0bf495273dbf4c1b133126a1378805e20594ccaad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dc665d5b607e11e5261cb0bf495273dbf4c1b133126a1378805e20594ccaad/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:18 np0005605476 podman[75178]: 2026-02-02 17:20:18.787783053 +0000 UTC m=+0.031566211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:18 np0005605476 podman[75178]: 2026-02-02 17:20:18.883098362 +0000 UTC m=+0.126881530 container init 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:20:18 np0005605476 podman[75178]: 2026-02-02 17:20:18.894314808 +0000 UTC m=+0.138097946 container start 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:18 np0005605476 bash[75178]: 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26
Feb  2 12:20:18 np0005605476 systemd[1]: Started Ceph mon.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: pidfile_write: ignore empty --pid-file
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: load: jerasure load: lrc 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Git sha 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: DB SUMMARY
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: DB Session ID:  YVWEYR8NAABFSRFBSKLQ
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                                     Options.env: 0x55f97ef8c440
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                                Options.info_log: 0x55f980529e80
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                                 Options.wal_dir: 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                    Options.write_buffer_manager: 0x55f980574140
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                               Options.row_cache: None
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                              Options.wal_filter: None
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.wal_compression: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.max_background_jobs: 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Compression algorithms supported:
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kZSTD supported: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kXpressCompression supported: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kZlibCompression supported: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:           Options.merge_operator: 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:        Options.compaction_filter: None
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f980580a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9805658d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.compression: NoCompression
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.num_levels: 7
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 25cd6f31-be6a-4568-affa-77d2d10d4958
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052818955109, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052818958448, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052818, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052818958583, "job": 1, "event": "recovery_finished"}
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f980592e00
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: DB pointer 0x55f9806dc000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.58 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.58 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 512.00 MB usage: 1.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???) e1 preinit fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).mds e1 new map
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-02-02T17:20:17:007731+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb  2 12:20:18 np0005605476 podman[75198]: 2026-02-02 17:20:18.975147448 +0000 UTC m=+0.048144239 container create b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : fsid eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T17:20:15.057605+0000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : created 2026-02-02T17:20:15.057605+0000
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 12:20:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 12:20:19 np0005605476 systemd[1]: Started libpod-conmon-b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27.scope.
Feb  2 12:20:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8385d3633fe619ceae904a8b9781bd9062cf7b3e80faafbc303861df57a5e635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8385d3633fe619ceae904a8b9781bd9062cf7b3e80faafbc303861df57a5e635/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8385d3633fe619ceae904a8b9781bd9062cf7b3e80faafbc303861df57a5e635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:19.045637317 +0000 UTC m=+0.118634158 container init b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:19.053096637 +0000 UTC m=+0.126093438 container start b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:18.959790895 +0000 UTC m=+0.032787716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:19 np0005605476 ceph-mon[75197]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:19.069876041 +0000 UTC m=+0.142872932 container attach b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb  2 12:20:19 np0005605476 systemd[1]: libpod-b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27.scope: Deactivated successfully.
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:19.274790231 +0000 UTC m=+0.347787032 container died b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8385d3633fe619ceae904a8b9781bd9062cf7b3e80faafbc303861df57a5e635-merged.mount: Deactivated successfully.
Feb  2 12:20:19 np0005605476 podman[75198]: 2026-02-02 17:20:19.331804769 +0000 UTC m=+0.404801560 container remove b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27 (image=quay.io/ceph/ceph:v20, name=reverent_tesla, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:20:19 np0005605476 systemd[1]: libpod-conmon-b4caec09702e24b90ba019adfcf55a832c4954f30b7c076fece448bcb74f1c27.scope: Deactivated successfully.
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.385083172 +0000 UTC m=+0.036904602 container create f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:19 np0005605476 systemd[1]: Started libpod-conmon-f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83.scope.
Feb  2 12:20:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060406a8900fede95a845d277fc63373c6ef730752af90703696c7c482fd9698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060406a8900fede95a845d277fc63373c6ef730752af90703696c7c482fd9698/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060406a8900fede95a845d277fc63373c6ef730752af90703696c7c482fd9698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.369175053 +0000 UTC m=+0.020996493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.487752068 +0000 UTC m=+0.139573498 container init f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.492651847 +0000 UTC m=+0.144473277 container start f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.497085932 +0000 UTC m=+0.148907352 container attach f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb  2 12:20:19 np0005605476 systemd[1]: libpod-f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83.scope: Deactivated successfully.
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.741321871 +0000 UTC m=+0.393143281 container died f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-060406a8900fede95a845d277fc63373c6ef730752af90703696c7c482fd9698-merged.mount: Deactivated successfully.
Feb  2 12:20:19 np0005605476 podman[75290]: 2026-02-02 17:20:19.797869536 +0000 UTC m=+0.449690966 container remove f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83 (image=quay.io/ceph/ceph:v20, name=optimistic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:20:19 np0005605476 systemd[1]: libpod-conmon-f22b046000511a607f542272b1d68182d09e8c5a66fdb28e90e4721aa02f8f83.scope: Deactivated successfully.
Feb  2 12:20:19 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:19 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:19 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:20 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:20 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:20 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:20 np0005605476 systemd[1]: Starting Ceph mgr.compute-0.hccdnu for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:20 np0005605476 podman[75473]: 2026-02-02 17:20:20.673935068 +0000 UTC m=+0.061560187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:21 np0005605476 podman[75473]: 2026-02-02 17:20:21.187624429 +0000 UTC m=+0.575249478 container create f51dea2484a885b1ef464f71470d5eb130f74e7dd5065bf2b6342dc1451e6a4c (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712b4fe75eacb05bd462801e6213002997eeb8625e3509a97a7e905c4d528a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712b4fe75eacb05bd462801e6213002997eeb8625e3509a97a7e905c4d528a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712b4fe75eacb05bd462801e6213002997eeb8625e3509a97a7e905c4d528a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712b4fe75eacb05bd462801e6213002997eeb8625e3509a97a7e905c4d528a8b/merged/var/lib/ceph/mgr/ceph-compute-0.hccdnu supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 podman[75473]: 2026-02-02 17:20:21.260957718 +0000 UTC m=+0.648582757 container init f51dea2484a885b1ef464f71470d5eb130f74e7dd5065bf2b6342dc1451e6a4c (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:20:21 np0005605476 podman[75473]: 2026-02-02 17:20:21.267404709 +0000 UTC m=+0.655029728 container start f51dea2484a885b1ef464f71470d5eb130f74e7dd5065bf2b6342dc1451e6a4c (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:21 np0005605476 bash[75473]: f51dea2484a885b1ef464f71470d5eb130f74e7dd5065bf2b6342dc1451e6a4c
Feb  2 12:20:21 np0005605476 systemd[1]: Started Ceph mgr.compute-0.hccdnu for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: pidfile_write: ignore empty --pid-file
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'alerts'
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.347444717 +0000 UTC m=+0.040961836 container create c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:20:21 np0005605476 systemd[1]: Started libpod-conmon-c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8.scope.
Feb  2 12:20:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c032f46cbeece462e07fd4e1d9af0cf987330e2be57d050e52dd14463c1581e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c032f46cbeece462e07fd4e1d9af0cf987330e2be57d050e52dd14463c1581e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c032f46cbeece462e07fd4e1d9af0cf987330e2be57d050e52dd14463c1581e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.331982441 +0000 UTC m=+0.025499570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.431650503 +0000 UTC m=+0.125167622 container init c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'balancer'
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.441409518 +0000 UTC m=+0.134926647 container start c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.44572961 +0000 UTC m=+0.139246729 container attach c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:21 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'cephadm'
Feb  2 12:20:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 12:20:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628682205' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 12:20:21 np0005605476 serene_moore[75531]: 
Feb  2 12:20:21 np0005605476 serene_moore[75531]: {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "health": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "status": "HEALTH_OK",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "checks": {},
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "mutes": []
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "election_epoch": 5,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "quorum": [
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        0
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    ],
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "quorum_names": [
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "compute-0"
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    ],
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "quorum_age": 2,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "monmap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "epoch": 1,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "min_mon_release_name": "tentacle",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_mons": 1
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "osdmap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "epoch": 1,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_osds": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_up_osds": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "osd_up_since": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_in_osds": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "osd_in_since": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_remapped_pgs": 0
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "pgmap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "pgs_by_state": [],
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_pgs": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_pools": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_objects": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "data_bytes": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "bytes_used": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "bytes_avail": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "bytes_total": 0
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "fsmap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "epoch": 1,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "btime": "2026-02-02T17:20:17:007731+0000",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "by_rank": [],
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "up:standby": 0
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "mgrmap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "available": false,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "num_standbys": 0,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "modules": [
Feb  2 12:20:21 np0005605476 serene_moore[75531]:            "iostat",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:            "nfs"
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        ],
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "services": {}
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "servicemap": {
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "epoch": 1,
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "modified": "2026-02-02T17:20:17.009633+0000",
Feb  2 12:20:21 np0005605476 serene_moore[75531]:        "services": {}
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    },
Feb  2 12:20:21 np0005605476 serene_moore[75531]:    "progress_events": {}
Feb  2 12:20:21 np0005605476 serene_moore[75531]: }
Feb  2 12:20:21 np0005605476 systemd[1]: libpod-c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8.scope: Deactivated successfully.
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.650037763 +0000 UTC m=+0.343554912 container died c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:20:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c032f46cbeece462e07fd4e1d9af0cf987330e2be57d050e52dd14463c1581e8-merged.mount: Deactivated successfully.
Feb  2 12:20:21 np0005605476 podman[75494]: 2026-02-02 17:20:21.689923848 +0000 UTC m=+0.383440957 container remove c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8 (image=quay.io/ceph/ceph:v20, name=serene_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:20:21 np0005605476 systemd[1]: libpod-conmon-c4e8677dda76fed760f5b658b4e018d25f249d4ddceb282f7eff7327982965c8.scope: Deactivated successfully.
Feb  2 12:20:22 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'crash'
Feb  2 12:20:22 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'dashboard'
Feb  2 12:20:22 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'devicehealth'
Feb  2 12:20:22 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 12:20:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 12:20:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 12:20:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]:  from numpy import show_config as show_numpy_config
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'influx'
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'insights'
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'iostat'
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'k8sevents'
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'localpool'
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 12:20:23 np0005605476 podman[75580]: 2026-02-02 17:20:23.771536717 +0000 UTC m=+0.057720069 container create b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:20:23 np0005605476 systemd[1]: Started libpod-conmon-b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac.scope.
Feb  2 12:20:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf72772c4489d79e6723021331394171b4cf1cfa9ba8bc3116f4f3c3ccf6e589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf72772c4489d79e6723021331394171b4cf1cfa9ba8bc3116f4f3c3ccf6e589/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf72772c4489d79e6723021331394171b4cf1cfa9ba8bc3116f4f3c3ccf6e589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:23 np0005605476 podman[75580]: 2026-02-02 17:20:23.840538744 +0000 UTC m=+0.126722106 container init b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:23 np0005605476 podman[75580]: 2026-02-02 17:20:23.749595568 +0000 UTC m=+0.035779030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:23 np0005605476 podman[75580]: 2026-02-02 17:20:23.844679431 +0000 UTC m=+0.130862783 container start b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:23 np0005605476 podman[75580]: 2026-02-02 17:20:23.84819606 +0000 UTC m=+0.134379472 container attach b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:20:23 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'mirroring'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'nfs'
Feb  2 12:20:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 12:20:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487849729' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]: 
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]: {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "health": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "status": "HEALTH_OK",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "checks": {},
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "mutes": []
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "election_epoch": 5,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "quorum": [
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        0
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    ],
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "quorum_names": [
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "compute-0"
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    ],
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "quorum_age": 5,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "monmap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "epoch": 1,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "min_mon_release_name": "tentacle",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_mons": 1
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "osdmap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "epoch": 1,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_osds": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_up_osds": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "osd_up_since": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_in_osds": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "osd_in_since": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_remapped_pgs": 0
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "pgmap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "pgs_by_state": [],
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_pgs": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_pools": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_objects": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "data_bytes": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "bytes_used": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "bytes_avail": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "bytes_total": 0
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "fsmap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "epoch": 1,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "btime": "2026-02-02T17:20:17:007731+0000",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "by_rank": [],
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "up:standby": 0
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "mgrmap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "available": false,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "num_standbys": 0,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "modules": [
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:            "iostat",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:            "nfs"
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        ],
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "services": {}
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "servicemap": {
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "epoch": 1,
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "modified": "2026-02-02T17:20:17.009633+0000",
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:        "services": {}
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    },
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]:    "progress_events": {}
Feb  2 12:20:24 np0005605476 vibrant_lehmann[75597]: }
Feb  2 12:20:24 np0005605476 systemd[1]: libpod-b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac.scope: Deactivated successfully.
Feb  2 12:20:24 np0005605476 podman[75580]: 2026-02-02 17:20:24.045503876 +0000 UTC m=+0.331687228 container died b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bf72772c4489d79e6723021331394171b4cf1cfa9ba8bc3116f4f3c3ccf6e589-merged.mount: Deactivated successfully.
Feb  2 12:20:24 np0005605476 podman[75580]: 2026-02-02 17:20:24.079494025 +0000 UTC m=+0.365677367 container remove b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac (image=quay.io/ceph/ceph:v20, name=vibrant_lehmann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:20:24 np0005605476 systemd[1]: libpod-conmon-b396f4de73829bc67b9075ab0795af19d1247dfd33314bbf40f0b0a38739f3ac.scope: Deactivated successfully.
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'orchestrator'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'osd_support'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'progress'
Feb  2 12:20:24 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'prometheus'
Feb  2 12:20:25 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rbd_support'
Feb  2 12:20:25 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rgw'
Feb  2 12:20:25 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rook'
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.139850045 +0000 UTC m=+0.040630317 container create 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'selftest'
Feb  2 12:20:26 np0005605476 systemd[1]: Started libpod-conmon-722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec.scope.
Feb  2 12:20:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3becf353950043f9af66b037e470a52750a3a18246b6e17f93b5abd431d3ffca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3becf353950043f9af66b037e470a52750a3a18246b6e17f93b5abd431d3ffca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3becf353950043f9af66b037e470a52750a3a18246b6e17f93b5abd431d3ffca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.121979681 +0000 UTC m=+0.022759973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.224211845 +0000 UTC m=+0.124992137 container init 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.23042794 +0000 UTC m=+0.131208202 container start 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.233308951 +0000 UTC m=+0.134089213 container attach 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'smb'
Feb  2 12:20:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 12:20:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100701487' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]: 
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]: {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "health": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "status": "HEALTH_OK",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "checks": {},
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "mutes": []
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "election_epoch": 5,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "quorum": [
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        0
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    ],
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "quorum_names": [
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "compute-0"
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    ],
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "quorum_age": 7,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "monmap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "epoch": 1,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "min_mon_release_name": "tentacle",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_mons": 1
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "osdmap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "epoch": 1,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_osds": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_up_osds": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "osd_up_since": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_in_osds": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "osd_in_since": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_remapped_pgs": 0
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "pgmap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "pgs_by_state": [],
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_pgs": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_pools": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_objects": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "data_bytes": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "bytes_used": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "bytes_avail": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "bytes_total": 0
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "fsmap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "epoch": 1,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "btime": "2026-02-02T17:20:17.007731+0000",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "by_rank": [],
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "up:standby": 0
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "mgrmap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "available": false,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "num_standbys": 0,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "modules": [
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:            "iostat",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:            "nfs"
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        ],
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "services": {}
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "servicemap": {
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "epoch": 1,
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "modified": "2026-02-02T17:20:17.009633+0000",
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:        "services": {}
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    },
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]:    "progress_events": {}
Feb  2 12:20:26 np0005605476 optimistic_maxwell[75652]: }
Feb  2 12:20:26 np0005605476 systemd[1]: libpod-722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec.scope: Deactivated successfully.
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.422634472 +0000 UTC m=+0.323414744 container died 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:20:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3becf353950043f9af66b037e470a52750a3a18246b6e17f93b5abd431d3ffca-merged.mount: Deactivated successfully.
Feb  2 12:20:26 np0005605476 podman[75635]: 2026-02-02 17:20:26.451387683 +0000 UTC m=+0.352167955 container remove 722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:26 np0005605476 systemd[1]: libpod-conmon-722db4020925c01458388d2f9aca565d14fcdb72b726f3fc5cda2f94745b86ec.scope: Deactivated successfully.
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'snap_schedule'
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'stats'
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'status'
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'telegraf'
Feb  2 12:20:26 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'telemetry'
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'volumes'
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: ms_deliver_dispatch: unhandled message 0x55735fadf860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hccdnu
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr handle_mgr_map Activating!
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.hccdnu(active, starting, since 0.00835164s)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr handle_mgr_map I am now activating
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mds metadata"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mon metadata"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hccdnu", "id": "compute-0.hccdnu"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mgr metadata", "who": "compute-0.hccdnu", "id": "compute-0.hccdnu"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: balancer
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer INFO root] Starting
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: crash
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:20:27
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Manager daemon compute-0.hccdnu is now available
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [balancer INFO root] No pools available
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: devicehealth
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Starting
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: iostat
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: nfs
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: orchestrator
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: pg_autoscaler
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: progress
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [progress INFO root] Loading...
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [progress INFO root] No stored events to load
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [progress INFO root] Loaded [] historic events
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] recovery thread starting
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] starting setup
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: rbd_support
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: status
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: telemetry
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] PerfHandler: starting
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TaskHandler: starting
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] setup complete
Feb  2 12:20:27 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: volumes
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: Activating manager daemon compute-0.hccdnu
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: Manager daemon compute-0.hccdnu is now available
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} : dispatch
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:27 np0005605476 ceph-mon[75197]: from='mgr.14102 192.168.122.100:0/4093669547' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:28 np0005605476 podman[75768]: 2026-02-02 17:20:28.531364686 +0000 UTC m=+0.059345095 container create 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:20:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.hccdnu(active, since 1.02131s)
Feb  2 12:20:28 np0005605476 systemd[1]: Started libpod-conmon-300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853.scope.
Feb  2 12:20:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709467ea20df024fb6e0e56dac91f54d96f937927908150fbebf7281222f504/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709467ea20df024fb6e0e56dac91f54d96f937927908150fbebf7281222f504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709467ea20df024fb6e0e56dac91f54d96f937927908150fbebf7281222f504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:28 np0005605476 podman[75768]: 2026-02-02 17:20:28.508559043 +0000 UTC m=+0.036539522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:28 np0005605476 podman[75768]: 2026-02-02 17:20:28.620363987 +0000 UTC m=+0.148344446 container init 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:28 np0005605476 podman[75768]: 2026-02-02 17:20:28.624438942 +0000 UTC m=+0.152419371 container start 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:28 np0005605476 podman[75768]: 2026-02-02 17:20:28.628826965 +0000 UTC m=+0.156807414 container attach 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:20:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 12:20:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591715879' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 12:20:29 np0005605476 happy_galois[75785]: 
Feb  2 12:20:29 np0005605476 happy_galois[75785]: {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "health": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "status": "HEALTH_OK",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "checks": {},
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "mutes": []
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "election_epoch": 5,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "quorum": [
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        0
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    ],
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "quorum_names": [
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "compute-0"
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    ],
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "quorum_age": 10,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "monmap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "epoch": 1,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "min_mon_release_name": "tentacle",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_mons": 1
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "osdmap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "epoch": 1,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_osds": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_up_osds": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "osd_up_since": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_in_osds": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "osd_in_since": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_remapped_pgs": 0
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "pgmap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "pgs_by_state": [],
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_pgs": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_pools": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_objects": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "data_bytes": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "bytes_used": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "bytes_avail": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "bytes_total": 0
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "fsmap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "epoch": 1,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "btime": "2026-02-02T17:20:17.007731+0000",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "by_rank": [],
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "up:standby": 0
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "mgrmap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "available": true,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "num_standbys": 0,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "modules": [
Feb  2 12:20:29 np0005605476 happy_galois[75785]:            "iostat",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:            "nfs"
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        ],
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "services": {}
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "servicemap": {
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "epoch": 1,
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "modified": "2026-02-02T17:20:17.009633+0000",
Feb  2 12:20:29 np0005605476 happy_galois[75785]:        "services": {}
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    },
Feb  2 12:20:29 np0005605476 happy_galois[75785]:    "progress_events": {}
Feb  2 12:20:29 np0005605476 happy_galois[75785]: }
Feb  2 12:20:29 np0005605476 systemd[1]: libpod-300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853.scope: Deactivated successfully.
Feb  2 12:20:29 np0005605476 conmon[75785]: conmon 300519458e8593d6bbe4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853.scope/container/memory.events
Feb  2 12:20:29 np0005605476 podman[75768]: 2026-02-02 17:20:29.128732637 +0000 UTC m=+0.656713036 container died 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:20:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3709467ea20df024fb6e0e56dac91f54d96f937927908150fbebf7281222f504-merged.mount: Deactivated successfully.
Feb  2 12:20:29 np0005605476 podman[75768]: 2026-02-02 17:20:29.159267368 +0000 UTC m=+0.687247767 container remove 300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853 (image=quay.io/ceph/ceph:v20, name=happy_galois, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:20:29 np0005605476 systemd[1]: libpod-conmon-300519458e8593d6bbe40395f1a1c7951c895789a3a68afe46fa19cbdd5fa853.scope: Deactivated successfully.
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.212032907 +0000 UTC m=+0.037629733 container create 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:20:29 np0005605476 systemd[1]: Started libpod-conmon-463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3.scope.
Feb  2 12:20:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092fc4f627d8c945c160ae3f4a844a31206a7e8d8400b88c4d7cf36619a7d3ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092fc4f627d8c945c160ae3f4a844a31206a7e8d8400b88c4d7cf36619a7d3ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092fc4f627d8c945c160ae3f4a844a31206a7e8d8400b88c4d7cf36619a7d3ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092fc4f627d8c945c160ae3f4a844a31206a7e8d8400b88c4d7cf36619a7d3ad/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.273768028 +0000 UTC m=+0.099364864 container init 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.27984251 +0000 UTC m=+0.105439336 container start 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.2837668 +0000 UTC m=+0.109363646 container attach 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.195752348 +0000 UTC m=+0.021349184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:29 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:29 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.hccdnu(active, since 2s)
Feb  2 12:20:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 12:20:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/608841720' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:20:29 np0005605476 inspiring_herschel[75838]: 
Feb  2 12:20:29 np0005605476 inspiring_herschel[75838]: [global]
Feb  2 12:20:29 np0005605476 inspiring_herschel[75838]: 	fsid = eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:20:29 np0005605476 inspiring_herschel[75838]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  2 12:20:29 np0005605476 inspiring_herschel[75838]: 	osd_crush_chooseleaf_type = 0
Feb  2 12:20:29 np0005605476 systemd[1]: libpod-463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3.scope: Deactivated successfully.
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.701611417 +0000 UTC m=+0.527208263 container died 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-092fc4f627d8c945c160ae3f4a844a31206a7e8d8400b88c4d7cf36619a7d3ad-merged.mount: Deactivated successfully.
Feb  2 12:20:29 np0005605476 podman[75822]: 2026-02-02 17:20:29.738251491 +0000 UTC m=+0.563848357 container remove 463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3 (image=quay.io/ceph/ceph:v20, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:29 np0005605476 systemd[1]: libpod-conmon-463e66ecbf28477af5dbfa46194d31f44c1687a11c90267d2c171a46800b46d3.scope: Deactivated successfully.
Feb  2 12:20:29 np0005605476 podman[75874]: 2026-02-02 17:20:29.79528273 +0000 UTC m=+0.042604303 container create 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:20:29 np0005605476 systemd[1]: Started libpod-conmon-427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf.scope.
Feb  2 12:20:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d348b547b6c05a3255f8773447fc49175d1aabb809dcbfb262d09a0a161246b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d348b547b6c05a3255f8773447fc49175d1aabb809dcbfb262d09a0a161246b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d348b547b6c05a3255f8773447fc49175d1aabb809dcbfb262d09a0a161246b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:29 np0005605476 podman[75874]: 2026-02-02 17:20:29.771687014 +0000 UTC m=+0.019008607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:29 np0005605476 podman[75874]: 2026-02-02 17:20:29.869633697 +0000 UTC m=+0.116955280 container init 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:29 np0005605476 podman[75874]: 2026-02-02 17:20:29.873077244 +0000 UTC m=+0.120398817 container start 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:29 np0005605476 podman[75874]: 2026-02-02 17:20:29.875995657 +0000 UTC m=+0.123317250 container attach 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1589755738' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1589755738' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.hccdnu(active, since 3s)
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/608841720' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:20:30 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1589755738' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  2 12:20:30 np0005605476 systemd[1]: libpod-427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf.scope: Deactivated successfully.
Feb  2 12:20:30 np0005605476 podman[75874]: 2026-02-02 17:20:30.575283713 +0000 UTC m=+0.822605286 container died 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:30 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3d348b547b6c05a3255f8773447fc49175d1aabb809dcbfb262d09a0a161246b-merged.mount: Deactivated successfully.
Feb  2 12:20:30 np0005605476 podman[75874]: 2026-02-02 17:20:30.604808146 +0000 UTC m=+0.852129719 container remove 427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf (image=quay.io/ceph/ceph:v20, name=dreamy_grothendieck, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:20:30 np0005605476 systemd[1]: libpod-conmon-427ac21c482583fcc46414a793a19f15c4599edb66d53c6bcf9e13805f6e3bdf.scope: Deactivated successfully.
Feb  2 12:20:30 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: ignoring --setuser ceph since I am not root
Feb  2 12:20:30 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: ignoring --setgroup ceph since I am not root
Feb  2 12:20:30 np0005605476 ceph-mgr[75493]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 12:20:30 np0005605476 ceph-mgr[75493]: pidfile_write: ignore empty --pid-file
Feb  2 12:20:30 np0005605476 podman[75928]: 2026-02-02 17:20:30.67017698 +0000 UTC m=+0.049303282 container create bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:20:30 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'alerts'
Feb  2 12:20:30 np0005605476 systemd[1]: Started libpod-conmon-bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2.scope.
Feb  2 12:20:30 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed8ce7e6c1f70d2fdece881c005c6c3b61a98408804dfa08a7d8286067d77e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed8ce7e6c1f70d2fdece881c005c6c3b61a98408804dfa08a7d8286067d77e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed8ce7e6c1f70d2fdece881c005c6c3b61a98408804dfa08a7d8286067d77e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:30 np0005605476 podman[75928]: 2026-02-02 17:20:30.64323646 +0000 UTC m=+0.022362782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:30 np0005605476 podman[75928]: 2026-02-02 17:20:30.742969603 +0000 UTC m=+0.122095925 container init bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:30 np0005605476 podman[75928]: 2026-02-02 17:20:30.746681448 +0000 UTC m=+0.125807770 container start bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb  2 12:20:30 np0005605476 podman[75928]: 2026-02-02 17:20:30.749910949 +0000 UTC m=+0.129037271 container attach bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:20:30 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'balancer'
Feb  2 12:20:30 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'cephadm'
Feb  2 12:20:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 12:20:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069454236' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]: {
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]:    "epoch": 5,
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]:    "available": true,
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]:    "active_name": "compute-0.hccdnu",
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]:    "num_standby": 0
Feb  2 12:20:31 np0005605476 gifted_mayer[75964]: }
Feb  2 12:20:31 np0005605476 systemd[1]: libpod-bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2.scope: Deactivated successfully.
Feb  2 12:20:31 np0005605476 podman[75928]: 2026-02-02 17:20:31.231557595 +0000 UTC m=+0.610683907 container died bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5eed8ce7e6c1f70d2fdece881c005c6c3b61a98408804dfa08a7d8286067d77e-merged.mount: Deactivated successfully.
Feb  2 12:20:31 np0005605476 podman[75928]: 2026-02-02 17:20:31.266192782 +0000 UTC m=+0.645319124 container remove bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2 (image=quay.io/ceph/ceph:v20, name=gifted_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:31 np0005605476 systemd[1]: libpod-conmon-bd31341443b3a8ad63162bd0fdaefb7ca4a8b947bd001df77c54d9d7eddefbc2.scope: Deactivated successfully.
Feb  2 12:20:31 np0005605476 podman[76013]: 2026-02-02 17:20:31.313331052 +0000 UTC m=+0.030992326 container create 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:20:31 np0005605476 systemd[1]: Started libpod-conmon-6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e.scope.
Feb  2 12:20:31 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7327867d0d80e551030a8b77d1c6eecfd8b56235aea002d23fec53ef2e14fdd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7327867d0d80e551030a8b77d1c6eecfd8b56235aea002d23fec53ef2e14fdd8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7327867d0d80e551030a8b77d1c6eecfd8b56235aea002d23fec53ef2e14fdd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:31 np0005605476 podman[76013]: 2026-02-02 17:20:31.369479625 +0000 UTC m=+0.087140909 container init 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:20:31 np0005605476 podman[76013]: 2026-02-02 17:20:31.373410686 +0000 UTC m=+0.091071960 container start 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:20:31 np0005605476 podman[76013]: 2026-02-02 17:20:31.379080996 +0000 UTC m=+0.096742300 container attach 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:20:31 np0005605476 podman[76013]: 2026-02-02 17:20:31.299077969 +0000 UTC m=+0.016739263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:31 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'crash'
Feb  2 12:20:31 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1589755738' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 12:20:31 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'dashboard'
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'devicehealth'
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 12:20:32 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 12:20:32 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 12:20:32 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]:  from numpy import show_config as show_numpy_config
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'influx'
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'insights'
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'iostat'
Feb  2 12:20:32 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'k8sevents'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'localpool'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'mirroring'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'nfs'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'orchestrator'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 12:20:33 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'osd_support'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'progress'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'prometheus'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rbd_support'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rgw'
Feb  2 12:20:34 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'rook'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'selftest'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'smb'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'snap_schedule'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'stats'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'status'
Feb  2 12:20:35 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'telegraf'
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'telemetry'
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr[py] Loading python module 'volumes'
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hccdnu restarted
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hccdnu
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: ms_deliver_dispatch: unhandled message 0x5596b1eb4000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr handle_mgr_map Activating!
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr handle_mgr_map I am now activating
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.hccdnu(active, starting, since 0.015354s)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hccdnu", "id": "compute-0.hccdnu"} v 0)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mgr metadata", "who": "compute-0.hccdnu", "id": "compute-0.hccdnu"} : dispatch
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mds metadata"} : dispatch
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata"} : dispatch
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mon metadata"} : dispatch
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: balancer
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Manager daemon compute-0.hccdnu is now available
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Starting
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:20:36
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:20:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] No pools available
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: Active manager daemon compute-0.hccdnu restarted
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: Activating manager daemon compute-0.hccdnu
Feb  2 12:20:36 np0005605476 ceph-mon[75197]: Manager daemon compute-0.hccdnu is now available
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: cephadm
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: crash
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: devicehealth
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: iostat
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: nfs
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: orchestrator
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: pg_autoscaler
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Starting
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: progress
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [progress INFO root] Loading...
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [progress INFO root] No stored events to load
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [progress INFO root] Loaded [] historic events
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] recovery thread starting
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] starting setup
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: rbd_support
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: status
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} v 0)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: telemetry
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} : dispatch
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] PerfHandler: starting
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TaskHandler: starting
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} v 0)
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} : dispatch
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] setup complete
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: mgr load Constructed class from module: volumes
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb  2 12:20:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.hccdnu(active, since 1.02415s)
Feb  2 12:20:37 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb  2 12:20:37 np0005605476 modest_montalcini[76029]: {
Feb  2 12:20:37 np0005605476 modest_montalcini[76029]:    "mgrmap_epoch": 7,
Feb  2 12:20:37 np0005605476 modest_montalcini[76029]:    "initialized": true
Feb  2 12:20:37 np0005605476 modest_montalcini[76029]: }
Feb  2 12:20:37 np0005605476 systemd[1]: libpod-6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e.scope: Deactivated successfully.
Feb  2 12:20:37 np0005605476 podman[76013]: 2026-02-02 17:20:37.680600364 +0000 UTC m=+6.398261628 container died 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:37 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7327867d0d80e551030a8b77d1c6eecfd8b56235aea002d23fec53ef2e14fdd8-merged.mount: Deactivated successfully.
Feb  2 12:20:37 np0005605476 podman[76013]: 2026-02-02 17:20:37.710764375 +0000 UTC m=+6.428425649 container remove 6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e (image=quay.io/ceph/ceph:v20, name=modest_montalcini, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:37 np0005605476 systemd[1]: libpod-conmon-6260db1045723ab432a8f7cce2d7f9d798257e50e64d5f7fbfd19b38a528719e.scope: Deactivated successfully.
Feb  2 12:20:37 np0005605476 podman[76177]: 2026-02-02 17:20:37.769662737 +0000 UTC m=+0.041706258 container create ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:37 np0005605476 systemd[1]: Started libpod-conmon-ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4.scope.
Feb  2 12:20:37 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2353ea8c8d868ebf9708165a7245644afac25883e6b7503d7a6919b19a27f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2353ea8c8d868ebf9708165a7245644afac25883e6b7503d7a6919b19a27f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2353ea8c8d868ebf9708165a7245644afac25883e6b7503d7a6919b19a27f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:37 np0005605476 podman[76177]: 2026-02-02 17:20:37.747850251 +0000 UTC m=+0.019893792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:37 np0005605476 podman[76177]: 2026-02-02 17:20:37.849711975 +0000 UTC m=+0.121755496 container init ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:37 np0005605476 podman[76177]: 2026-02-02 17:20:37.854457339 +0000 UTC m=+0.126500860 container start ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:37 np0005605476 podman[76177]: 2026-02-02 17:20:37.858093911 +0000 UTC m=+0.130137452 container attach ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1080850630' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: Found migration_current of "None". Setting to last migration.
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/mirror_snapshot_schedule"} : dispatch
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hccdnu/trash_purge_schedule"} : dispatch
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1080850630' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  2 12:20:38 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1080850630' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  2 12:20:38 np0005605476 reverent_benz[76193]: module 'orchestrator' is already enabled (always-on)
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.hccdnu(active, since 2s)
Feb  2 12:20:38 np0005605476 systemd[1]: libpod-ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4.scope: Deactivated successfully.
Feb  2 12:20:38 np0005605476 podman[76177]: 2026-02-02 17:20:38.670637251 +0000 UTC m=+0.942680772 container died ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9c2353ea8c8d868ebf9708165a7245644afac25883e6b7503d7a6919b19a27f2-merged.mount: Deactivated successfully.
Feb  2 12:20:38 np0005605476 podman[76177]: 2026-02-02 17:20:38.699365431 +0000 UTC m=+0.971408962 container remove ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4 (image=quay.io/ceph/ceph:v20, name=reverent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:38 np0005605476 systemd[1]: libpod-conmon-ad99439afee2bda3524ae7ea2eff63ffcc8ded820a882cbdfb1bb3547719ebb4.scope: Deactivated successfully.
Feb  2 12:20:38 np0005605476 podman[76230]: 2026-02-02 17:20:38.764563701 +0000 UTC m=+0.049024854 container create b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:20:38 np0005605476 systemd[1]: Started libpod-conmon-b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc.scope.
Feb  2 12:20:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669a928899d6dc05ea11b56be0b6ff5fbbe07284386b965a7724f3320bdb96be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669a928899d6dc05ea11b56be0b6ff5fbbe07284386b965a7724f3320bdb96be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669a928899d6dc05ea11b56be0b6ff5fbbe07284386b965a7724f3320bdb96be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:38 np0005605476 podman[76230]: 2026-02-02 17:20:38.835552513 +0000 UTC m=+0.120013666 container init b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:38 np0005605476 podman[76230]: 2026-02-02 17:20:38.748806856 +0000 UTC m=+0.033268019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:38 np0005605476 podman[76230]: 2026-02-02 17:20:38.839032201 +0000 UTC m=+0.123493354 container start b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:20:38 np0005605476 podman[76230]: 2026-02-02 17:20:38.842033716 +0000 UTC m=+0.126494869 container attach b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:20:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019899164 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:20:39 np0005605476 systemd[1]: libpod-b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc.scope: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76230]: 2026-02-02 17:20:39.31294482 +0000 UTC m=+0.597405963 container died b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:20:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-669a928899d6dc05ea11b56be0b6ff5fbbe07284386b965a7724f3320bdb96be-merged.mount: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76230]: 2026-02-02 17:20:39.355396397 +0000 UTC m=+0.639857530 container remove b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc (image=quay.io/ceph/ceph:v20, name=festive_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:39 np0005605476 systemd[1]: libpod-conmon-b8d1182fea66d9bc66645ed755d8576506c6bb55203b1f39dc43e21b3726acbc.scope: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.394973544 +0000 UTC m=+0.029008210 container create 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:20:39 np0005605476 systemd[1]: Started libpod-conmon-87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d.scope.
Feb  2 12:20:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f9e7c24f6b46d086a6d445c2c6255bf4fcdc9b42fd3de1871a4bd8e5f985bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f9e7c24f6b46d086a6d445c2c6255bf4fcdc9b42fd3de1871a4bd8e5f985bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f9e7c24f6b46d086a6d445c2c6255bf4fcdc9b42fd3de1871a4bd8e5f985bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.445270933 +0000 UTC m=+0.079305609 container init 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.450341886 +0000 UTC m=+0.084376542 container start 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1080850630' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.453728491 +0000 UTC m=+0.087763247 container attach 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.380789464 +0000 UTC m=+0.014824150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO cherrypy.error] [02/Feb/2026:17:20:39] ENGINE Bus STARTING
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : [02/Feb/2026:17:20:39] ENGINE Bus STARTING
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO cherrypy.error] [02/Feb/2026:17:20:39] ENGINE Serving on http://192.168.122.100:8765
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : [02/Feb/2026:17:20:39] ENGINE Serving on http://192.168.122.100:8765
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO cherrypy.error] [02/Feb/2026:17:20:39] ENGINE Serving on https://192.168.122.100:7150
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : [02/Feb/2026:17:20:39] ENGINE Serving on https://192.168.122.100:7150
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO cherrypy.error] [02/Feb/2026:17:20:39] ENGINE Bus STARTED
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : [02/Feb/2026:17:20:39] ENGINE Bus STARTED
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO cherrypy.error] [02/Feb/2026:17:20:39] ENGINE Client ('192.168.122.100', 49654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : [02/Feb/2026:17:20:39] ENGINE Client ('192.168.122.100', 49654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Set ssh ssh_user
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb  2 12:20:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Set ssh ssh_config
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb  2 12:20:39 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb  2 12:20:39 np0005605476 practical_beaver[76299]: ssh user set to ceph-admin. sudo will be used
Feb  2 12:20:39 np0005605476 systemd[1]: libpod-87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d.scope: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.862602425 +0000 UTC m=+0.496637101 container died 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a6f9e7c24f6b46d086a6d445c2c6255bf4fcdc9b42fd3de1871a4bd8e5f985bf-merged.mount: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76283]: 2026-02-02 17:20:39.890406099 +0000 UTC m=+0.524440765 container remove 87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d (image=quay.io/ceph/ceph:v20, name=practical_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:20:39 np0005605476 systemd[1]: libpod-conmon-87267cc656fa5f1b457df601db6b0ae4b745541ddb2a53b1c405199bd6a1467d.scope: Deactivated successfully.
Feb  2 12:20:39 np0005605476 podman[76360]: 2026-02-02 17:20:39.939653539 +0000 UTC m=+0.035160683 container create e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:20:39 np0005605476 systemd[1]: Started libpod-conmon-e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759.scope.
Feb  2 12:20:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:39.920386295 +0000 UTC m=+0.015893429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:40.022328811 +0000 UTC m=+0.117835945 container init e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:40.028030891 +0000 UTC m=+0.123537985 container start e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:40.03082518 +0000 UTC m=+0.126332304 container attach e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Set ssh ssh_identity_key
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Set ssh private key
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Set ssh private key
Feb  2 12:20:40 np0005605476 systemd[1]: libpod-e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759.scope: Deactivated successfully.
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:40.42714195 +0000 UTC m=+0.522649224 container died e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:20:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ec99121b3170316b45c5e239d986c8a13e261bb25fda6e315f8d7fbddbbe22f3-merged.mount: Deactivated successfully.
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: [02/Feb/2026:17:20:39] ENGINE Bus STARTING
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:40 np0005605476 podman[76360]: 2026-02-02 17:20:40.457374393 +0000 UTC m=+0.552881497 container remove e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759 (image=quay.io/ceph/ceph:v20, name=brave_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:20:40 np0005605476 systemd[1]: libpod-conmon-e55615e69a5eca8320d97096153d23a6b5b5c3a126757ed4490561ab9bf57759.scope: Deactivated successfully.
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.506848078 +0000 UTC m=+0.038708233 container create 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 12:20:40 np0005605476 systemd[1]: Started libpod-conmon-98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d.scope.
Feb  2 12:20:40 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.573411186 +0000 UTC m=+0.105271321 container init 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.579526218 +0000 UTC m=+0.111386343 container start 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.487612666 +0000 UTC m=+0.019472821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.583313845 +0000 UTC m=+0.115174010 container attach 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb  2 12:20:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb  2 12:20:40 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb  2 12:20:40 np0005605476 systemd[1]: libpod-98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d.scope: Deactivated successfully.
Feb  2 12:20:40 np0005605476 podman[76414]: 2026-02-02 17:20:40.980816718 +0000 UTC m=+0.512676843 container died 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:41 np0005605476 systemd[1]: var-lib-containers-storage-overlay-953cee86e95b02701fa526c615910c24c3824345d81f285ad4163316a9920e4c-merged.mount: Deactivated successfully.
Feb  2 12:20:41 np0005605476 podman[76414]: 2026-02-02 17:20:41.012378699 +0000 UTC m=+0.544238824 container remove 98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d (image=quay.io/ceph/ceph:v20, name=vigorous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:20:41 np0005605476 systemd[1]: libpod-conmon-98592ceaa1e45e138960d12886433a0a92eac4df21c02c89309a8faedb321f5d.scope: Deactivated successfully.
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.059003964 +0000 UTC m=+0.034679859 container create f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:41 np0005605476 systemd[1]: Started libpod-conmon-f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca.scope.
Feb  2 12:20:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd65ac48e688b814027ddbbb88275ed3ba2a56e9d439980421695db8e6b74220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd65ac48e688b814027ddbbb88275ed3ba2a56e9d439980421695db8e6b74220/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd65ac48e688b814027ddbbb88275ed3ba2a56e9d439980421695db8e6b74220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.136541271 +0000 UTC m=+0.112217226 container init f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.041587353 +0000 UTC m=+0.017263238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.143274311 +0000 UTC m=+0.118950196 container start f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.146103981 +0000 UTC m=+0.121779856 container attach f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: [02/Feb/2026:17:20:39] ENGINE Serving on http://192.168.122.100:8765
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: [02/Feb/2026:17:20:39] ENGINE Serving on https://192.168.122.100:7150
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: [02/Feb/2026:17:20:39] ENGINE Bus STARTED
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: [02/Feb/2026:17:20:39] ENGINE Client ('192.168.122.100', 49654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: Set ssh ssh_user
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: Set ssh ssh_config
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: ssh user set to ceph-admin. sudo will be used
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: Set ssh ssh_identity_key
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: Set ssh private key
Feb  2 12:20:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:41 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:41 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:41 np0005605476 eager_mendeleev[76482]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7Bd1fwg06lztoZvbkPROmhRtt+Q1fn5E8lQA9G0OtVL8MD8DaX0h2Txeus+wbMEeOjquhs7JvX84T/xQQvqlq6qVt+ihzSaDL+Z6RYq8zGWS0YSRzUMmgEjDXf4jlfg2sYgHrc4+IYE1KdJOc82EePqkI9GBiiafERiOiS+2CH34DbMT1Az3fqgqLn5HsZSzIzio79DpfBGfbzz9iyA/wVyTFifVcJ+pGC4exY45LhiabWvoVWQbpgFAlw8/MEiA1W0Acx8F88MqK3nqBCkHeCzBhpE2XRTpgsIvAWEmTCq2RKzVSOhHRYRSIncxjBLSJorXCKmv/7kzaUY0H73kID7ozAg+1m9nERPozAHFWXeTroygMt1ukWP/YzSnc9jKYzzkOv/wo9RnB3pRoLEx3yN/u4hvlbKhrZoeB/m65nUM8ZcztZT3xfqjwk6zEYJ0s2NhHqZRLgsWK29zx8jPJ45am8HKHQN7UpdqO4XTN7hdlzDAkP+71opTubWE6GOE= zuul@controller
Feb  2 12:20:41 np0005605476 systemd[1]: libpod-f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca.scope: Deactivated successfully.
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.580073472 +0000 UTC m=+0.555749337 container died f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:20:41 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bd65ac48e688b814027ddbbb88275ed3ba2a56e9d439980421695db8e6b74220-merged.mount: Deactivated successfully.
Feb  2 12:20:41 np0005605476 podman[76466]: 2026-02-02 17:20:41.615627165 +0000 UTC m=+0.591303030 container remove f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca (image=quay.io/ceph/ceph:v20, name=eager_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:41 np0005605476 systemd[1]: libpod-conmon-f24e0a997c34d638f44f035664c0bfd2c1bfcd8c34c0be5dda534d017447deca.scope: Deactivated successfully.
Feb  2 12:20:41 np0005605476 podman[76520]: 2026-02-02 17:20:41.663493355 +0000 UTC m=+0.032685953 container create fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:20:41 np0005605476 systemd[1]: Started libpod-conmon-fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5.scope.
Feb  2 12:20:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ef405c269164cb784a629fbdeb5eb3c1e311af4621ddaf0d8dd050d8576cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ef405c269164cb784a629fbdeb5eb3c1e311af4621ddaf0d8dd050d8576cbc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ef405c269164cb784a629fbdeb5eb3c1e311af4621ddaf0d8dd050d8576cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:41 np0005605476 podman[76520]: 2026-02-02 17:20:41.722439228 +0000 UTC m=+0.091631836 container init fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:20:41 np0005605476 podman[76520]: 2026-02-02 17:20:41.727737347 +0000 UTC m=+0.096929945 container start fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:20:41 np0005605476 podman[76520]: 2026-02-02 17:20:41.730716411 +0000 UTC m=+0.099909049 container attach fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:20:41 np0005605476 podman[76520]: 2026-02-02 17:20:41.649557652 +0000 UTC m=+0.018750270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:42 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:42 np0005605476 systemd-logind[799]: New session 20 of user ceph-admin.
Feb  2 12:20:42 np0005605476 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 12:20:42 np0005605476 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 12:20:42 np0005605476 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 12:20:42 np0005605476 systemd[1]: Starting User Manager for UID 42477...
Feb  2 12:20:42 np0005605476 ceph-mon[75197]: Set ssh ssh_identity_pub
Feb  2 12:20:42 np0005605476 systemd[76566]: Queued start job for default target Main User Target.
Feb  2 12:20:42 np0005605476 systemd[76566]: Created slice User Application Slice.
Feb  2 12:20:42 np0005605476 systemd[76566]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 12:20:42 np0005605476 systemd[76566]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 12:20:42 np0005605476 systemd[76566]: Reached target Paths.
Feb  2 12:20:42 np0005605476 systemd[76566]: Reached target Timers.
Feb  2 12:20:42 np0005605476 systemd[76566]: Starting D-Bus User Message Bus Socket...
Feb  2 12:20:42 np0005605476 systemd[76566]: Starting Create User's Volatile Files and Directories...
Feb  2 12:20:42 np0005605476 systemd[76566]: Finished Create User's Volatile Files and Directories.
Feb  2 12:20:42 np0005605476 systemd[76566]: Listening on D-Bus User Message Bus Socket.
Feb  2 12:20:42 np0005605476 systemd[76566]: Reached target Sockets.
Feb  2 12:20:42 np0005605476 systemd[76566]: Reached target Basic System.
Feb  2 12:20:42 np0005605476 systemd[76566]: Reached target Main User Target.
Feb  2 12:20:42 np0005605476 systemd[76566]: Startup finished in 116ms.
Feb  2 12:20:42 np0005605476 systemd[1]: Started User Manager for UID 42477.
Feb  2 12:20:42 np0005605476 systemd[1]: Started Session 20 of User ceph-admin.
Feb  2 12:20:42 np0005605476 systemd-logind[799]: New session 22 of user ceph-admin.
Feb  2 12:20:42 np0005605476 systemd[1]: Started Session 22 of User ceph-admin.
Feb  2 12:20:42 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:42 np0005605476 systemd-logind[799]: New session 23 of user ceph-admin.
Feb  2 12:20:42 np0005605476 systemd[1]: Started Session 23 of User ceph-admin.
Feb  2 12:20:43 np0005605476 systemd-logind[799]: New session 24 of user ceph-admin.
Feb  2 12:20:43 np0005605476 systemd[1]: Started Session 24 of User ceph-admin.
Feb  2 12:20:43 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb  2 12:20:43 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb  2 12:20:43 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:43 np0005605476 systemd-logind[799]: New session 25 of user ceph-admin.
Feb  2 12:20:43 np0005605476 systemd[1]: Started Session 25 of User ceph-admin.
Feb  2 12:20:43 np0005605476 systemd-logind[799]: New session 26 of user ceph-admin.
Feb  2 12:20:43 np0005605476 systemd[1]: Started Session 26 of User ceph-admin.
Feb  2 12:20:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052587 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:20:44 np0005605476 systemd-logind[799]: New session 27 of user ceph-admin.
Feb  2 12:20:44 np0005605476 systemd[1]: Started Session 27 of User ceph-admin.
Feb  2 12:20:44 np0005605476 ceph-mon[75197]: Deploying cephadm binary to compute-0
Feb  2 12:20:44 np0005605476 systemd-logind[799]: New session 28 of user ceph-admin.
Feb  2 12:20:44 np0005605476 systemd[1]: Started Session 28 of User ceph-admin.
Feb  2 12:20:44 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:44 np0005605476 systemd-logind[799]: New session 29 of user ceph-admin.
Feb  2 12:20:44 np0005605476 systemd[1]: Started Session 29 of User ceph-admin.
Feb  2 12:20:45 np0005605476 systemd-logind[799]: New session 30 of user ceph-admin.
Feb  2 12:20:45 np0005605476 systemd[1]: Started Session 30 of User ceph-admin.
Feb  2 12:20:45 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:46 np0005605476 systemd-logind[799]: New session 31 of user ceph-admin.
Feb  2 12:20:46 np0005605476 systemd[1]: Started Session 31 of User ceph-admin.
Feb  2 12:20:46 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:46 np0005605476 systemd-logind[799]: New session 32 of user ceph-admin.
Feb  2 12:20:46 np0005605476 systemd[1]: Started Session 32 of User ceph-admin.
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Added host compute-0
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:20:47 np0005605476 blissful_newton[76536]: Added host 'compute-0' with addr '192.168.122.100'
Feb  2 12:20:47 np0005605476 systemd[1]: libpod-fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5.scope: Deactivated successfully.
Feb  2 12:20:47 np0005605476 podman[76520]: 2026-02-02 17:20:47.322380475 +0000 UTC m=+5.691573093 container died fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-64ef405c269164cb784a629fbdeb5eb3c1e311af4621ddaf0d8dd050d8576cbc-merged.mount: Deactivated successfully.
Feb  2 12:20:47 np0005605476 podman[76520]: 2026-02-02 17:20:47.363262858 +0000 UTC m=+5.732455466 container remove fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5 (image=quay.io/ceph/ceph:v20, name=blissful_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:47 np0005605476 systemd[1]: libpod-conmon-fd6677a46e4491ee3f85edde271cab735161cde3f97f44734378c196b6f448c5.scope: Deactivated successfully.
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.415633965 +0000 UTC m=+0.038035184 container create 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:47 np0005605476 systemd[1]: Started libpod-conmon-35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2.scope.
Feb  2 12:20:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebcf281e976a2947e58b2752da297913a26941e0b5a334b50cb6877a3cc0c67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebcf281e976a2947e58b2752da297913a26941e0b5a334b50cb6877a3cc0c67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebcf281e976a2947e58b2752da297913a26941e0b5a334b50cb6877a3cc0c67/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.486431092 +0000 UTC m=+0.108832321 container init 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.395206209 +0000 UTC m=+0.017607458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.492077582 +0000 UTC m=+0.114478831 container start 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.495706374 +0000 UTC m=+0.118107593 container attach 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb  2 12:20:47 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 12:20:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:47 np0005605476 nice_solomon[76999]: Scheduled mon update...
Feb  2 12:20:47 np0005605476 systemd[1]: libpod-35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2.scope: Deactivated successfully.
Feb  2 12:20:47 np0005605476 podman[76971]: 2026-02-02 17:20:47.985867281 +0000 UTC m=+0.608268490 container died 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:20:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7ebcf281e976a2947e58b2752da297913a26941e0b5a334b50cb6877a3cc0c67-merged.mount: Deactivated successfully.
Feb  2 12:20:48 np0005605476 podman[76971]: 2026-02-02 17:20:48.025023945 +0000 UTC m=+0.647425154 container remove 35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2 (image=quay.io/ceph/ceph:v20, name=nice_solomon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:48 np0005605476 systemd[1]: libpod-conmon-35cc2e2ff7eae74668dc63b8496c212aeeda172f1fc0efc8d943cfc7a0e87af2.scope: Deactivated successfully.
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.07870379 +0000 UTC m=+0.039844765 container create d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:20:48 np0005605476 systemd[1]: Started libpod-conmon-d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c.scope.
Feb  2 12:20:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8a567d59942ab1b1efee8c05f8428669456b9ac68e8f880d9be1637bc19687/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8a567d59942ab1b1efee8c05f8428669456b9ac68e8f880d9be1637bc19687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8a567d59942ab1b1efee8c05f8428669456b9ac68e8f880d9be1637bc19687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.138502176 +0000 UTC m=+0.099643241 container init d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.142149949 +0000 UTC m=+0.103290954 container start d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.145795802 +0000 UTC m=+0.106936867 container attach d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.057299946 +0000 UTC m=+0.018440971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:48 np0005605476 podman[77036]: 2026-02-02 17:20:48.248929531 +0000 UTC m=+0.619093414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: Added host compute-0
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.356869436 +0000 UTC m=+0.048884760 container create c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:48 np0005605476 systemd[1]: Started libpod-conmon-c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358.scope.
Feb  2 12:20:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.419964326 +0000 UTC m=+0.111979660 container init c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.426379397 +0000 UTC m=+0.118394751 container start c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.43038217 +0000 UTC m=+0.122397524 container attach c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.337466519 +0000 UTC m=+0.029481853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:48 np0005605476 ecstatic_antonelli[77135]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  2 12:20:48 np0005605476 systemd[1]: libpod-c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358.scope: Deactivated successfully.
Feb  2 12:20:48 np0005605476 conmon[77135]: conmon c1a80d27cae985bb6078 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358.scope/container/memory.events
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.53566966 +0000 UTC m=+0.227684984 container died c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:20:48 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:48 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb  2 12:20:48 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ecf6bb660e5bc19cfd2d91b1985c5038b91343d7ea7af9324ec4022443000652-merged.mount: Deactivated successfully.
Feb  2 12:20:48 np0005605476 hungry_williamson[77081]: Scheduled mgr update...
Feb  2 12:20:48 np0005605476 podman[77119]: 2026-02-02 17:20:48.56863839 +0000 UTC m=+0.260653704 container remove c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358 (image=quay.io/ceph/ceph:v20, name=ecstatic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:48 np0005605476 systemd[1]: libpod-d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c.scope: Deactivated successfully.
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.577811149 +0000 UTC m=+0.538952124 container died d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:48 np0005605476 systemd[1]: libpod-conmon-c1a80d27cae985bb60781df0e6dea1033162d2732281120d23cc9911fb565358.scope: Deactivated successfully.
Feb  2 12:20:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3f8a567d59942ab1b1efee8c05f8428669456b9ac68e8f880d9be1637bc19687-merged.mount: Deactivated successfully.
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:48 np0005605476 podman[77064]: 2026-02-02 17:20:48.611443378 +0000 UTC m=+0.572584343 container remove d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c (image=quay.io/ceph/ceph:v20, name=hungry_williamson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:48 np0005605476 systemd[1]: libpod-conmon-d5195824b4758ad23c59b31405f922fa875ae93f751a4f6096449c73c7a4d99c.scope: Deactivated successfully.
Feb  2 12:20:48 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:48 np0005605476 podman[77165]: 2026-02-02 17:20:48.654905884 +0000 UTC m=+0.032595441 container create a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb  2 12:20:48 np0005605476 systemd[1]: Started libpod-conmon-a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd.scope.
Feb  2 12:20:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb119804d187fa50015e1865774ebfb1e5319c8114d70fab9b3de9b9d016fd4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb119804d187fa50015e1865774ebfb1e5319c8114d70fab9b3de9b9d016fd4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb119804d187fa50015e1865774ebfb1e5319c8114d70fab9b3de9b9d016fd4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:48 np0005605476 podman[77165]: 2026-02-02 17:20:48.700018756 +0000 UTC m=+0.077708313 container init a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:20:48 np0005605476 podman[77165]: 2026-02-02 17:20:48.705360047 +0000 UTC m=+0.083049594 container start a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:20:48 np0005605476 podman[77165]: 2026-02-02 17:20:48.708184436 +0000 UTC m=+0.085873993 container attach a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:20:48 np0005605476 podman[77165]: 2026-02-02 17:20:48.638442159 +0000 UTC m=+0.016131736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054702 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:20:49 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:49 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service crash spec with placement *
Feb  2 12:20:49 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 gracious_feistel[77212]: Scheduled crash update...
Feb  2 12:20:49 np0005605476 systemd[1]: libpod-a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd.scope: Deactivated successfully.
Feb  2 12:20:49 np0005605476 podman[77165]: 2026-02-02 17:20:49.154432723 +0000 UTC m=+0.532122310 container died a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:20:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-eb119804d187fa50015e1865774ebfb1e5319c8114d70fab9b3de9b9d016fd4b-merged.mount: Deactivated successfully.
Feb  2 12:20:49 np0005605476 podman[77165]: 2026-02-02 17:20:49.191509549 +0000 UTC m=+0.569199146 container remove a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd (image=quay.io/ceph/ceph:v20, name=gracious_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:49 np0005605476 systemd[1]: libpod-conmon-a77599c4460553b849d5da262d0d6c257d4f66c325075a65a76d50a1bd869dfd.scope: Deactivated successfully.
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.243750223 +0000 UTC m=+0.035767050 container create d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:20:49 np0005605476 systemd[1]: Started libpod-conmon-d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8.scope.
Feb  2 12:20:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2150322cf4c1bb1ade0b709e264b3c14ad915a63a1760c8306dd50553a9f71a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2150322cf4c1bb1ade0b709e264b3c14ad915a63a1760c8306dd50553a9f71a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2150322cf4c1bb1ade0b709e264b3c14ad915a63a1760c8306dd50553a9f71a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.225697734 +0000 UTC m=+0.017714591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.324264364 +0000 UTC m=+0.116281211 container init d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.329098081 +0000 UTC m=+0.121114908 container start d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.332092175 +0000 UTC m=+0.124109002 container attach d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb  2 12:20:49 np0005605476 podman[77404]: 2026-02-02 17:20:49.444695571 +0000 UTC m=+0.045019340 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb  2 12:20:49 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: Saving service mon spec with placement count:5
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: Saving service mgr spec with placement count:2
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 podman[77404]: 2026-02-02 17:20:49.564380148 +0000 UTC m=+0.164703897 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2275966021' entity='client.admin' 
Feb  2 12:20:49 np0005605476 systemd[1]: libpod-d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8.scope: Deactivated successfully.
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.748265245 +0000 UTC m=+0.540282072 container died d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2150322cf4c1bb1ade0b709e264b3c14ad915a63a1760c8306dd50553a9f71a2-merged.mount: Deactivated successfully.
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:49 np0005605476 podman[77339]: 2026-02-02 17:20:49.786955316 +0000 UTC m=+0.578972143 container remove d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8 (image=quay.io/ceph/ceph:v20, name=laughing_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:49 np0005605476 systemd[1]: libpod-conmon-d63234cff70a785531f135c6a66c2610df5559bb6fa1b0f081032218cda860c8.scope: Deactivated successfully.
Feb  2 12:20:49 np0005605476 podman[77514]: 2026-02-02 17:20:49.833337935 +0000 UTC m=+0.034342300 container create 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:49 np0005605476 systemd[1]: Started libpod-conmon-25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2.scope.
Feb  2 12:20:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0081ffcc8718066e04fc99cca362861f0c05ada3ff1c813dd342067d2adbd3ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0081ffcc8718066e04fc99cca362861f0c05ada3ff1c813dd342067d2adbd3ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0081ffcc8718066e04fc99cca362861f0c05ada3ff1c813dd342067d2adbd3ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:49 np0005605476 podman[77514]: 2026-02-02 17:20:49.906712285 +0000 UTC m=+0.107716670 container init 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:49 np0005605476 podman[77514]: 2026-02-02 17:20:49.910870282 +0000 UTC m=+0.111874647 container start 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:49 np0005605476 podman[77514]: 2026-02-02 17:20:49.81793452 +0000 UTC m=+0.018938905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:49 np0005605476 podman[77514]: 2026-02-02 17:20:49.914845274 +0000 UTC m=+0.115849649 container attach 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:20:50 np0005605476 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77613 (sysctl)
Feb  2 12:20:50 np0005605476 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb  2 12:20:50 np0005605476 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb  2 12:20:50 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:50 np0005605476 systemd[1]: libpod-25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2.scope: Deactivated successfully.
Feb  2 12:20:50 np0005605476 podman[77514]: 2026-02-02 17:20:50.372278358 +0000 UTC m=+0.573282753 container died 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:20:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0081ffcc8718066e04fc99cca362861f0c05ada3ff1c813dd342067d2adbd3ff-merged.mount: Deactivated successfully.
Feb  2 12:20:50 np0005605476 podman[77514]: 2026-02-02 17:20:50.422776432 +0000 UTC m=+0.623780827 container remove 25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2 (image=quay.io/ceph/ceph:v20, name=gifted_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:50 np0005605476 systemd[1]: libpod-conmon-25ac177ddcb264d0591b7edc50635e6fe187e00d3f43142375b2ca79eda5c0a2.scope: Deactivated successfully.
Feb  2 12:20:50 np0005605476 podman[77683]: 2026-02-02 17:20:50.486332095 +0000 UTC m=+0.047356077 container create ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:20:50 np0005605476 systemd[1]: Started libpod-conmon-ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466.scope.
Feb  2 12:20:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827dd04c9a69eb887b25f02fe4e4715c3c73fcabdf64d7d7a0ec10d336b90376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827dd04c9a69eb887b25f02fe4e4715c3c73fcabdf64d7d7a0ec10d336b90376/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827dd04c9a69eb887b25f02fe4e4715c3c73fcabdf64d7d7a0ec10d336b90376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:50 np0005605476 podman[77683]: 2026-02-02 17:20:50.467988528 +0000 UTC m=+0.029012530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: Saving service crash spec with placement *
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2275966021' entity='client.admin' 
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:50 np0005605476 podman[77683]: 2026-02-02 17:20:50.568361069 +0000 UTC m=+0.129385051 container init ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:20:50 np0005605476 podman[77683]: 2026-02-02 17:20:50.573212306 +0000 UTC m=+0.134236278 container start ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:50 np0005605476 podman[77683]: 2026-02-02 17:20:50.57584377 +0000 UTC m=+0.136867752 container attach ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:20:50 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:50 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:20:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:50 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Added label _admin to host compute-0
Feb  2 12:20:50 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb  2 12:20:50 np0005605476 great_goldstine[77714]: Added label _admin to host compute-0
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77683]: 2026-02-02 17:20:51.002385212 +0000 UTC m=+0.563409214 container died ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:20:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-827dd04c9a69eb887b25f02fe4e4715c3c73fcabdf64d7d7a0ec10d336b90376-merged.mount: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77683]: 2026-02-02 17:20:51.033031107 +0000 UTC m=+0.594055089 container remove ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466 (image=quay.io/ceph/ceph:v20, name=great_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-conmon-ec6ff564ad6095e043f2ae58bb1b6b797cab1f57e8e6069d76c2c2af7076a466.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.089841509 +0000 UTC m=+0.039510665 container create 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.104135103 +0000 UTC m=+0.033901438 container create 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:51 np0005605476 systemd[1]: Started libpod-conmon-8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b.scope.
Feb  2 12:20:51 np0005605476 systemd[1]: Started libpod-conmon-0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d.scope.
Feb  2 12:20:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6808a745e38a117b41818b6448b8d8cfefc61f549344715e9cc7603a784c78e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6808a745e38a117b41818b6448b8d8cfefc61f549344715e9cc7603a784c78e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6808a745e38a117b41818b6448b8d8cfefc61f549344715e9cc7603a784c78e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.152744464 +0000 UTC m=+0.102413640 container init 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.156109279 +0000 UTC m=+0.085875624 container init 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.157411875 +0000 UTC m=+0.107081021 container start 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.160432171 +0000 UTC m=+0.090198516 container start 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.161307645 +0000 UTC m=+0.110976831 container attach 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:20:51 np0005605476 hardcore_pare[77866]: 167 167
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.164931588 +0000 UTC m=+0.094697933 container attach 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.165244056 +0000 UTC m=+0.095010401 container died 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.07531447 +0000 UTC m=+0.024983646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.089454928 +0000 UTC m=+0.019221273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:51 np0005605476 podman[77843]: 2026-02-02 17:20:51.192111604 +0000 UTC m=+0.121877949 container remove 0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-conmon-0014b8028b9053ca64e163bc648ff1cdd2f1e93a146502e18d2124469a0ff80d.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-461d9325aef2a2b6f468851b49ac7d54462bddd1a9cc8ae35045c067baf867a1-merged.mount: Deactivated successfully.
Feb  2 12:20:51 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb  2 12:20:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151690402' entity='client.admin' 
Feb  2 12:20:51 np0005605476 vigilant_jepsen[77864]: set mgr/dashboard/cluster/status
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.783405134 +0000 UTC m=+0.733074300 container died 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6808a745e38a117b41818b6448b8d8cfefc61f549344715e9cc7603a784c78e3-merged.mount: Deactivated successfully.
Feb  2 12:20:51 np0005605476 podman[77822]: 2026-02-02 17:20:51.815495729 +0000 UTC m=+0.765164885 container remove 8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b (image=quay.io/ceph/ceph:v20, name=vigilant_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:51 np0005605476 systemd[1]: libpod-conmon-8915316ae6c4b0ec9215d78cda59eb2df345d4f06c51ad6a9a3a52c3dab84f7b.scope: Deactivated successfully.
Feb  2 12:20:51 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:51 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:51 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.234121398 +0000 UTC m=+0.044173717 container create 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:20:52 np0005605476 systemd[1]: Started libpod-conmon-7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2.scope.
Feb  2 12:20:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0cbff55f078f67220f88e9cda1f53c0dce89df9a3b60dbc9c6e111513b9382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0cbff55f078f67220f88e9cda1f53c0dce89df9a3b60dbc9c6e111513b9382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0cbff55f078f67220f88e9cda1f53c0dce89df9a3b60dbc9c6e111513b9382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.215444911 +0000 UTC m=+0.025497210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0cbff55f078f67220f88e9cda1f53c0dce89df9a3b60dbc9c6e111513b9382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.327269755 +0000 UTC m=+0.137322064 container init 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.332982246 +0000 UTC m=+0.143034525 container start 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.336397872 +0000 UTC m=+0.146450151 container attach 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: Added label _admin to host compute-0
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/151690402' entity='client.admin' 
Feb  2 12:20:52 np0005605476 python3[78011]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:52 np0005605476 podman[78017]: 2026-02-02 17:20:52.622154973 +0000 UTC m=+0.034163595 container create a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:52 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:52 np0005605476 systemd[1]: Started libpod-conmon-a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f.scope.
Feb  2 12:20:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d8340d7ebf72150bc400019c1dc5c331e4d6e87b945ab8fee47c4ddac1968/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6d8340d7ebf72150bc400019c1dc5c331e4d6e87b945ab8fee47c4ddac1968/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:52 np0005605476 podman[78017]: 2026-02-02 17:20:52.683263917 +0000 UTC m=+0.095272569 container init a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:52 np0005605476 podman[78017]: 2026-02-02 17:20:52.688756192 +0000 UTC m=+0.100764814 container start a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:20:52 np0005605476 podman[78017]: 2026-02-02 17:20:52.691702405 +0000 UTC m=+0.103711027 container attach a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:20:52 np0005605476 podman[78017]: 2026-02-02 17:20:52.609093505 +0000 UTC m=+0.021102147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]: [
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:    {
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "available": false,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "being_replaced": false,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "ceph_device_lvm": false,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "lsm_data": {},
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "lvs": [],
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "path": "/dev/sr0",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "rejected_reasons": [
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "Insufficient space (<5GB)",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "Has a FileSystem"
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        ],
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        "sys_api": {
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "actuators": null,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "device_nodes": [
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:                "sr0"
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            ],
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "devname": "sr0",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "human_readable_size": "482.00 KB",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "id_bus": "ata",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "model": "QEMU DVD-ROM",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "nr_requests": "2",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "parent": "/dev/sr0",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "partitions": {},
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "path": "/dev/sr0",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "removable": "1",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "rev": "2.5+",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "ro": "0",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "rotational": "1",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "sas_address": "",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "sas_device_handle": "",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "scheduler_mode": "mq-deadline",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "sectors": 0,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "sectorsize": "2048",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "size": 493568.0,
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "support_discard": "2048",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "type": "disk",
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:            "vendor": "QEMU"
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:        }
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]:    }
Feb  2 12:20:52 np0005605476 eloquent_lewin[77981]: ]
Feb  2 12:20:52 np0005605476 systemd[1]: libpod-7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2.scope: Deactivated successfully.
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.809350114 +0000 UTC m=+0.619402393 container died 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1a0cbff55f078f67220f88e9cda1f53c0dce89df9a3b60dbc9c6e111513b9382-merged.mount: Deactivated successfully.
Feb  2 12:20:52 np0005605476 podman[77965]: 2026-02-02 17:20:52.847843319 +0000 UTC m=+0.657895588 container remove 7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:20:52 np0005605476 systemd[1]: libpod-conmon-7e59b8e1d11787759683d77d359ac73119ef0c1aed230cf7537bfaf14f1668c2.scope: Deactivated successfully.
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:20:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:20:52 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 12:20:52 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3773902061' entity='client.admin' 
Feb  2 12:20:53 np0005605476 systemd[1]: libpod-a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f.scope: Deactivated successfully.
Feb  2 12:20:53 np0005605476 podman[78017]: 2026-02-02 17:20:53.132277283 +0000 UTC m=+0.544285915 container died a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:20:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fc6d8340d7ebf72150bc400019c1dc5c331e4d6e87b945ab8fee47c4ddac1968-merged.mount: Deactivated successfully.
Feb  2 12:20:53 np0005605476 podman[78017]: 2026-02-02 17:20:53.16300338 +0000 UTC m=+0.575012002 container remove a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f (image=quay.io/ceph/ceph:v20, name=funny_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:53 np0005605476 systemd[1]: libpod-conmon-a130edd400e7f8cb051b0c88ea93a00a72e5a118193d2bfb11ce2a5008154e7f.scope: Deactivated successfully.
Feb  2 12:20:53 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.conf
Feb  2 12:20:53 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.conf
Feb  2 12:20:53 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:53 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 12:20:53 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3773902061' entity='client.admin' 
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.conf
Feb  2 12:20:53 np0005605476 ansible-async_wrapper.py[79358]: Invoked with j976635151886 30 /home/zuul/.ansible/tmp/ansible-tmp-1770052853.501559-36574-203548260240233/AnsiballZ_command.py _
Feb  2 12:20:53 np0005605476 ansible-async_wrapper.py[79436]: Starting module and watcher
Feb  2 12:20:53 np0005605476 ansible-async_wrapper.py[79436]: Start watching 79437 (30)
Feb  2 12:20:53 np0005605476 ansible-async_wrapper.py[79437]: Start module (79437)
Feb  2 12:20:53 np0005605476 ansible-async_wrapper.py[79358]: Return async_wrapper task started.
Feb  2 12:20:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:20:54 np0005605476 python3[79438]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.client.admin.keyring
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.client.admin.keyring
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.133010913 +0000 UTC m=+0.030808650 container create 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:54 np0005605476 systemd[1]: Started libpod-conmon-0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6.scope.
Feb  2 12:20:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0449839f4591d7049b6b48e1a8be38057da8b47c360fd927b6558a565f4ed7cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0449839f4591d7049b6b48e1a8be38057da8b47c360fd927b6558a565f4ed7cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.120654814 +0000 UTC m=+0.018452571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.225684187 +0000 UTC m=+0.123481944 container init 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.230153153 +0000 UTC m=+0.127950890 container start 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.233312972 +0000 UTC m=+0.131110709 container attach 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 03e4257a-ab8b-4db7-b89d-11aa52f4ae3a (Updating crash deployment (+1 -> 1))
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 12:20:54 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:20:54 np0005605476 practical_gould[79578]: 
Feb  2 12:20:54 np0005605476 practical_gould[79578]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 12:20:54 np0005605476 systemd[1]: libpod-0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6.scope: Deactivated successfully.
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.668731795 +0000 UTC m=+0.566529552 container died 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:20:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0449839f4591d7049b6b48e1a8be38057da8b47c360fd927b6558a565f4ed7cf-merged.mount: Deactivated successfully.
Feb  2 12:20:54 np0005605476 podman[79514]: 2026-02-02 17:20:54.704681709 +0000 UTC m=+0.602479446 container remove 0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6 (image=quay.io/ceph/ceph:v20, name=practical_gould, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:20:54 np0005605476 systemd[1]: libpod-conmon-0e9d731d5c0d038512168d02969973b942eeb920562c6eec2626c9b02b1fdbb6.scope: Deactivated successfully.
Feb  2 12:20:54 np0005605476 ansible-async_wrapper.py[79437]: Module complete (79437)
Feb  2 12:20:54 np0005605476 podman[79880]: 2026-02-02 17:20:54.990960624 +0000 UTC m=+0.035818001 container create 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:55 np0005605476 systemd[1]: Started libpod-conmon-045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62.scope.
Feb  2 12:20:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:55.05531551 +0000 UTC m=+0.100172877 container init 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:55.059968071 +0000 UTC m=+0.104825448 container start 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:20:55 np0005605476 cranky_brattain[79920]: 167 167
Feb  2 12:20:55 np0005605476 systemd[1]: libpod-045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62.scope: Deactivated successfully.
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:55.06277106 +0000 UTC m=+0.107628497 container attach 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:55.063276274 +0000 UTC m=+0.108133661 container died 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:54.974516251 +0000 UTC m=+0.019373658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1bf2944ccea122d5ca59475000aa990ab5da6bae919558cf688a8d3039c9de14-merged.mount: Deactivated successfully.
Feb  2 12:20:55 np0005605476 podman[79880]: 2026-02-02 17:20:55.091765038 +0000 UTC m=+0.136622415 container remove 045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:20:55 np0005605476 systemd[1]: libpod-conmon-045e2fc4b7faf48e9c61bb7d264194b3b550314b9e73b3768f09fcf8c1a82d62.scope: Deactivated successfully.
Feb  2 12:20:55 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:55 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:55 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:55 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:55 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:55 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:55 np0005605476 python3[79998]: ansible-ansible.legacy.async_status Invoked with jid=j976635151886.79358 mode=status _async_dir=/root/.ansible_async
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: Updating compute-0:/var/lib/ceph/eb48d0ef-3496-563c-b73d-661fb962013e/config/ceph.client.admin.keyring
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: Deploying daemon crash.compute-0 on compute-0
Feb  2 12:20:55 np0005605476 systemd[1]: Starting Ceph crash.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:55 np0005605476 python3[80088]: ansible-ansible.legacy.async_status Invoked with jid=j976635151886.79358 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 12:20:55 np0005605476 podman[80135]: 2026-02-02 17:20:55.740081656 +0000 UTC m=+0.034218946 container create 43c95f4962571d6ee3a5291e1f020e4230f3e56f098995891d143922a65cac4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6daa03ef21358e51594cc97c548d0cd288d82b5051d7c14b4f297bf74e97f/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6daa03ef21358e51594cc97c548d0cd288d82b5051d7c14b4f297bf74e97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6daa03ef21358e51594cc97c548d0cd288d82b5051d7c14b4f297bf74e97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf6daa03ef21358e51594cc97c548d0cd288d82b5051d7c14b4f297bf74e97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:55 np0005605476 podman[80135]: 2026-02-02 17:20:55.782811702 +0000 UTC m=+0.076949002 container init 43c95f4962571d6ee3a5291e1f020e4230f3e56f098995891d143922a65cac4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:55 np0005605476 podman[80135]: 2026-02-02 17:20:55.788470711 +0000 UTC m=+0.082607991 container start 43c95f4962571d6ee3a5291e1f020e4230f3e56f098995891d143922a65cac4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:20:55 np0005605476 bash[80135]: 43c95f4962571d6ee3a5291e1f020e4230f3e56f098995891d143922a65cac4b
Feb  2 12:20:55 np0005605476 podman[80135]: 2026-02-02 17:20:55.725248578 +0000 UTC m=+0.019385888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:55 np0005605476 systemd[1]: Started Ceph crash.compute-0 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: INFO:ceph-crash:pinging cluster to exercise our key
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 03e4257a-ab8b-4db7-b89d-11aa52f4ae3a (Updating crash deployment (+1 -> 1))
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 03e4257a-ab8b-4db7-b89d-11aa52f4ae3a (Updating crash deployment (+1 -> 1)) in 1 seconds
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 8fa31ebc-7adf-4d57-a8c3-71f322706267 (Updating mgr deployment (+1 -> 2))
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.yqudah", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.yqudah", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yqudah", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mgr services"} : dispatch
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.yqudah on compute-0
Feb  2 12:20:55 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.yqudah on compute-0
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.928+0000 7fdd03515640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.928+0000 7fdd03515640 -1 AuthRegistry(0x7fdcfc052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.929+0000 7fdd03515640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.929+0000 7fdd03515640 -1 AuthRegistry(0x7fdd03513fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.929+0000 7fdd0128a640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: 2026-02-02T17:20:55.929+0000 7fdd03515640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb  2 12:20:55 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-crash-compute-0[80150]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb  2 12:20:56 np0005605476 python3[80242]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.316608859 +0000 UTC m=+0.032970752 container create 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:56 np0005605476 systemd[1]: Started libpod-conmon-14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de.scope.
Feb  2 12:20:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.395957197 +0000 UTC m=+0.112319150 container init 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.303295393 +0000 UTC m=+0.019657306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.401944096 +0000 UTC m=+0.118306009 container start 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.405797014 +0000 UTC m=+0.122159007 container attach 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:56 np0005605476 boring_jennings[80303]: 167 167
Feb  2 12:20:56 np0005605476 systemd[1]: libpod-14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de.scope: Deactivated successfully.
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.407348458 +0000 UTC m=+0.123710371 container died 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:20:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4d8b89f26428c9cc717588155712ceeced064d76611bfaf9af92ba2c93731660-merged.mount: Deactivated successfully.
Feb  2 12:20:56 np0005605476 podman[80286]: 2026-02-02 17:20:56.438045904 +0000 UTC m=+0.154407807 container remove 14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_jennings, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:20:56 np0005605476 systemd[1]: libpod-conmon-14b8d817110b8ecd4efb83923639b4fd53df885dd1f87033b45e7f587db1d8de.scope: Deactivated successfully.
Feb  2 12:20:56 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:56 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:56 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:56 np0005605476 ceph-mgr[75493]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb  2 12:20:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 12:20:56 np0005605476 systemd[1]: Reloading.
Feb  2 12:20:56 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:20:56 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.yqudah", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yqudah", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: Deploying daemon mgr.compute-0.yqudah on compute-0
Feb  2 12:20:56 np0005605476 ceph-mon[75197]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 12:20:56 np0005605476 python3[80382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:56 np0005605476 podman[80422]: 2026-02-02 17:20:56.933191922 +0000 UTC m=+0.054949701 container create 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:56 np0005605476 systemd[1]: Started libpod-conmon-7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8.scope.
Feb  2 12:20:56 np0005605476 systemd[1]: Starting Ceph mgr.compute-0.yqudah for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:20:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a83a077b073d2e8c58cb2b8d313119803c1ff7a1288661d55a88c5503496f04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a83a077b073d2e8c58cb2b8d313119803c1ff7a1288661d55a88c5503496f04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a83a077b073d2e8c58cb2b8d313119803c1ff7a1288661d55a88c5503496f04/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:56.911687865 +0000 UTC m=+0.033445674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:57.013925209 +0000 UTC m=+0.135682998 container init 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:57.021508593 +0000 UTC m=+0.143266352 container start 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:57.024466986 +0000 UTC m=+0.146224745 container attach 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:57 np0005605476 podman[80506]: 2026-02-02 17:20:57.160274697 +0000 UTC m=+0.034165804 container create 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:20:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d52903197d7ee38f5a1d66bc6fd1531b4da6c59b04e7eda07244e6ec803be7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d52903197d7ee38f5a1d66bc6fd1531b4da6c59b04e7eda07244e6ec803be7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d52903197d7ee38f5a1d66bc6fd1531b4da6c59b04e7eda07244e6ec803be7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d52903197d7ee38f5a1d66bc6fd1531b4da6c59b04e7eda07244e6ec803be7/merged/var/lib/ceph/mgr/ceph-compute-0.yqudah supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:57 np0005605476 podman[80506]: 2026-02-02 17:20:57.237508056 +0000 UTC m=+0.111399183 container init 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:20:57 np0005605476 podman[80506]: 2026-02-02 17:20:57.14265413 +0000 UTC m=+0.016545247 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:20:57 np0005605476 podman[80506]: 2026-02-02 17:20:57.243721941 +0000 UTC m=+0.117613048 container start 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:57 np0005605476 bash[80506]: 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966
Feb  2 12:20:57 np0005605476 systemd[1]: Started Ceph mgr.compute-0.yqudah for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: pidfile_write: ignore empty --pid-file
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:57 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 8fa31ebc-7adf-4d57-a8c3-71f322706267 (Updating mgr deployment (+1 -> 2))
Feb  2 12:20:57 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 8fa31ebc-7adf-4d57-a8c3-71f322706267 (Updating mgr deployment (+1 -> 2)) in 1 seconds
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'alerts'
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'balancer'
Feb  2 12:20:57 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:20:57 np0005605476 zealous_curie[80439]: 
Feb  2 12:20:57 np0005605476 zealous_curie[80439]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 12:20:57 np0005605476 systemd[1]: libpod-7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8.scope: Deactivated successfully.
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:57.473318868 +0000 UTC m=+0.595076647 container died 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:20:57 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 2 completed events
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:20:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:57 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8a83a077b073d2e8c58cb2b8d313119803c1ff7a1288661d55a88c5503496f04-merged.mount: Deactivated successfully.
Feb  2 12:20:57 np0005605476 podman[80422]: 2026-02-02 17:20:57.513121771 +0000 UTC m=+0.634879540 container remove 7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8 (image=quay.io/ceph/ceph:v20, name=zealous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:57 np0005605476 systemd[1]: libpod-conmon-7cd80f0a4cabd293ecb06aa9306a08927aa3dadaf0916b88216bdaca2f438ae8.scope: Deactivated successfully.
Feb  2 12:20:57 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'cephadm'
Feb  2 12:20:57 np0005605476 podman[80704]: 2026-02-02 17:20:57.89664609 +0000 UTC m=+0.059765717 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:20:57 np0005605476 python3[80702]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:57 np0005605476 podman[80704]: 2026-02-02 17:20:57.991411183 +0000 UTC m=+0.154530780 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.01188081 +0000 UTC m=+0.046018419 container create f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:20:58 np0005605476 systemd[1]: Started libpod-conmon-f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c.scope.
Feb  2 12:20:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e6a5d0e0d6632fd76ab3f6a27694cec84719a7b3b8a7c74b1b62064928baf4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e6a5d0e0d6632fd76ab3f6a27694cec84719a7b3b8a7c74b1b62064928baf4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e6a5d0e0d6632fd76ab3f6a27694cec84719a7b3b8a7c74b1b62064928baf4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:57.988525831 +0000 UTC m=+0.022663460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.103219177 +0000 UTC m=+0.137356876 container init f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.111639554 +0000 UTC m=+0.145777193 container start f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.118338293 +0000 UTC m=+0.152475992 container attach f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:20:58 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'crash'
Feb  2 12:20:58 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'dashboard'
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3572231907' entity='client.admin' 
Feb  2 12:20:58 np0005605476 systemd[1]: libpod-f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c.scope: Deactivated successfully.
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.553761006 +0000 UTC m=+0.587898625 container died f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-82e6a5d0e0d6632fd76ab3f6a27694cec84719a7b3b8a7c74b1b62064928baf4-merged.mount: Deactivated successfully.
Feb  2 12:20:58 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 12:20:58 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:58 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 12:20:58 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 12:20:58 np0005605476 podman[80735]: 2026-02-02 17:20:58.591879261 +0000 UTC m=+0.626016870 container remove f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c (image=quay.io/ceph/ceph:v20, name=elated_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:58 np0005605476 systemd[1]: libpod-conmon-f01aca1e1ac100e8ee2ae50f99eb8e3632fac971aa76b48335a8a806b7777b1c.scope: Deactivated successfully.
Feb  2 12:20:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:20:58 np0005605476 python3[80974]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:58 np0005605476 podman[80975]: 2026-02-02 17:20:58.880688798 +0000 UTC m=+0.035810471 container create d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:20:58 np0005605476 systemd[1]: Started libpod-conmon-d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3.scope.
Feb  2 12:20:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56d098ae439ecdbbb922eb9bef06e31ae27cab7c64dd1e88133c15134c784262/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56d098ae439ecdbbb922eb9bef06e31ae27cab7c64dd1e88133c15134c784262/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56d098ae439ecdbbb922eb9bef06e31ae27cab7c64dd1e88133c15134c784262/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:58 np0005605476 podman[80975]: 2026-02-02 17:20:58.955257332 +0000 UTC m=+0.110379015 container init d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:58 np0005605476 ansible-async_wrapper.py[79436]: Done in kid B.
Feb  2 12:20:58 np0005605476 podman[80975]: 2026-02-02 17:20:58.862506106 +0000 UTC m=+0.017627789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:58 np0005605476 podman[80975]: 2026-02-02 17:20:58.960728406 +0000 UTC m=+0.115850079 container start d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:20:58 np0005605476 podman[80975]: 2026-02-02 17:20:58.970454921 +0000 UTC m=+0.125576604 container attach d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:20:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.002894006 +0000 UTC m=+0.045318280 container create 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:20:59 np0005605476 systemd[1]: Started libpod-conmon-6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f.scope.
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'devicehealth'
Feb  2 12:20:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.070157573 +0000 UTC m=+0.112581937 container init 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.075209526 +0000 UTC m=+0.117633810 container start 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:20:59 np0005605476 dreamy_yalow[81026]: 167 167
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 conmon[81026]: conmon 6661e83311528e97dac3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f.scope/container/memory.events
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.079044034 +0000 UTC m=+0.121468398 container attach 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.079406464 +0000 UTC m=+0.121830778 container died 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:58.985735782 +0000 UTC m=+0.028160106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f8a7ceb48432d7b37908b6a45af472ffd0baf1bf12044d1cf98c5ff1428c1579-merged.mount: Deactivated successfully.
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 12:20:59 np0005605476 podman[81009]: 2026-02-02 17:20:59.113246459 +0000 UTC m=+0.155670733 container remove 6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f (image=quay.io/ceph/ceph:v20, name=dreamy_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-conmon-6661e83311528e97dac37e87e9b43e7bf2026e584009cae73a451188fc074b4f.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hccdnu (unknown last config time)...
Feb  2 12:20:59 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hccdnu (unknown last config time)...
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hccdnu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.hccdnu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mgr services"} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hccdnu on compute-0
Feb  2 12:20:59 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hccdnu on compute-0
Feb  2 12:20:59 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah[80522]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 12:20:59 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah[80522]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 12:20:59 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah[80522]:  from numpy import show_config as show_numpy_config
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'influx'
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'insights'
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3391784642' entity='client.admin' 
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 podman[80975]: 2026-02-02 17:20:59.392762653 +0000 UTC m=+0.547884316 container died d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:20:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-56d098ae439ecdbbb922eb9bef06e31ae27cab7c64dd1e88133c15134c784262-merged.mount: Deactivated successfully.
Feb  2 12:20:59 np0005605476 podman[80975]: 2026-02-02 17:20:59.425400984 +0000 UTC m=+0.580522647 container remove d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3 (image=quay.io/ceph/ceph:v20, name=elegant_bouman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'iostat'
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-conmon-d32d989528c63f89b8bb0ea3ff1f4c1a9a81e98813b23e70c0ea7ac7a0f3aae3.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3572231907' entity='client.admin' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.hccdnu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3391784642' entity='client.admin' 
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'k8sevents'
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.543026681 +0000 UTC m=+0.038645661 container create 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:20:59 np0005605476 systemd[1]: Started libpod-conmon-220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a.scope.
Feb  2 12:20:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.607659134 +0000 UTC m=+0.103278134 container init 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.613376736 +0000 UTC m=+0.108995716 container start 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:20:59 np0005605476 vigilant_robinson[81177]: 167 167
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.616819073 +0000 UTC m=+0.112438083 container attach 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.617467911 +0000 UTC m=+0.113086891 container died 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.526959358 +0000 UTC m=+0.022578388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8aac17ec718b62f0933ef93794baf46d39d7d9ca18a466ad0c883c6d681c48a8-merged.mount: Deactivated successfully.
Feb  2 12:20:59 np0005605476 podman[81140]: 2026-02-02 17:20:59.651143821 +0000 UTC m=+0.146762801 container remove 220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a (image=quay.io/ceph/ceph:v20, name=vigilant_robinson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:20:59 np0005605476 systemd[1]: libpod-conmon-220cf4a0808c2b92c82a660bdbd3eb4b10b34b52ab3363dc17447f13ad89a81a.scope: Deactivated successfully.
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:20:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:20:59 np0005605476 python3[81185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:20:59 np0005605476 podman[81226]: 2026-02-02 17:20:59.811383971 +0000 UTC m=+0.043677533 container create 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:20:59 np0005605476 systemd[1]: Started libpod-conmon-4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808.scope.
Feb  2 12:20:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:20:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36780e290b75059626674acf831eb52e088054e6e39f58495b074fbc93a91145/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36780e290b75059626674acf831eb52e088054e6e39f58495b074fbc93a91145/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36780e290b75059626674acf831eb52e088054e6e39f58495b074fbc93a91145/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:20:59 np0005605476 podman[81226]: 2026-02-02 17:20:59.864277993 +0000 UTC m=+0.096571585 container init 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:20:59 np0005605476 podman[81226]: 2026-02-02 17:20:59.867984328 +0000 UTC m=+0.100277890 container start 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:20:59 np0005605476 podman[81226]: 2026-02-02 17:20:59.870935491 +0000 UTC m=+0.103229063 container attach 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:20:59 np0005605476 podman[81226]: 2026-02-02 17:20:59.797390017 +0000 UTC m=+0.029683609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'localpool'
Feb  2 12:20:59 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 12:21:00 np0005605476 podman[81330]: 2026-02-02 17:21:00.178190169 +0000 UTC m=+0.047479361 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'mirroring'
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3625052206' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  2 12:21:00 np0005605476 podman[81330]: 2026-02-02 17:21:00.259390329 +0000 UTC m=+0.128679491 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'nfs'
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: Reconfiguring mgr.compute-0.hccdnu (unknown last config time)...
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: Reconfiguring daemon mgr.compute-0.hccdnu on compute-0
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3625052206' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'orchestrator'
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3625052206' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb  2 12:21:00 np0005605476 goofy_jones[81264]: set require_min_compat_client to mimic
Feb  2 12:21:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb  2 12:21:00 np0005605476 systemd[1]: libpod-4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808.scope: Deactivated successfully.
Feb  2 12:21:00 np0005605476 podman[81226]: 2026-02-02 17:21:00.723253554 +0000 UTC m=+0.955547116 container died 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-36780e290b75059626674acf831eb52e088054e6e39f58495b074fbc93a91145-merged.mount: Deactivated successfully.
Feb  2 12:21:00 np0005605476 podman[81226]: 2026-02-02 17:21:00.769388036 +0000 UTC m=+1.001681598 container remove 4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808 (image=quay.io/ceph/ceph:v20, name=goofy_jones, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 12:21:00 np0005605476 systemd[1]: libpod-conmon-4f979c9f588467d57d7808b981eb0b74cbed22af8f702c636f85f6bfb67f5808.scope: Deactivated successfully.
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'osd_support'
Feb  2 12:21:00 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 12:21:01 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'progress'
Feb  2 12:21:01 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'prometheus'
Feb  2 12:21:01 np0005605476 python3[81510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:01 np0005605476 podman[81511]: 2026-02-02 17:21:01.382968134 +0000 UTC m=+0.053128250 container create 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:01 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'rbd_support'
Feb  2 12:21:01 np0005605476 systemd[1]: Started libpod-conmon-8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b.scope.
Feb  2 12:21:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5546032e773a51b2978e212827be9ec898d6faca04d9a337d1c6a484bceaf646/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5546032e773a51b2978e212827be9ec898d6faca04d9a337d1c6a484bceaf646/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5546032e773a51b2978e212827be9ec898d6faca04d9a337d1c6a484bceaf646/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:01 np0005605476 podman[81511]: 2026-02-02 17:21:01.36474123 +0000 UTC m=+0.034901376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:01 np0005605476 podman[81511]: 2026-02-02 17:21:01.461797548 +0000 UTC m=+0.131957714 container init 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:01 np0005605476 podman[81511]: 2026-02-02 17:21:01.467318933 +0000 UTC m=+0.137479059 container start 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:21:01 np0005605476 podman[81511]: 2026-02-02 17:21:01.470772041 +0000 UTC m=+0.140932177 container attach 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb  2 12:21:01 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'rgw'
Feb  2 12:21:01 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:01 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3625052206' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 12:21:01 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'rook'
Feb  2 12:21:01 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'selftest'
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Added host compute-0
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 competent_goldwasser[81527]: Added host 'compute-0' with addr '192.168.122.100'
Feb  2 12:21:02 np0005605476 competent_goldwasser[81527]: Scheduled mon update...
Feb  2 12:21:02 np0005605476 competent_goldwasser[81527]: Scheduled mgr update...
Feb  2 12:21:02 np0005605476 competent_goldwasser[81527]: Scheduled osd.default_drive_group update...
Feb  2 12:21:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 8388b00d-9f67-48d6-b45d-ef6d177d1866 (Updating mgr deployment (-1 -> 1))
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.yqudah from compute-0 -- ports [8765]
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.yqudah from compute-0 -- ports [8765]
Feb  2 12:21:02 np0005605476 systemd[1]: libpod-8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b.scope: Deactivated successfully.
Feb  2 12:21:02 np0005605476 podman[81511]: 2026-02-02 17:21:02.304564571 +0000 UTC m=+0.974724687 container died 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:21:02 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'smb'
Feb  2 12:21:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5546032e773a51b2978e212827be9ec898d6faca04d9a337d1c6a484bceaf646-merged.mount: Deactivated successfully.
Feb  2 12:21:02 np0005605476 podman[81511]: 2026-02-02 17:21:02.339070215 +0000 UTC m=+1.009230331 container remove 8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b (image=quay.io/ceph/ceph:v20, name=competent_goldwasser, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:21:02 np0005605476 systemd[1]: libpod-conmon-8bbed4319b522aa123ce97ba0bf94e78451b474a6067a0f0c55e956334ff928b.scope: Deactivated successfully.
Feb  2 12:21:02 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'snap_schedule'
Feb  2 12:21:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:02 np0005605476 systemd[1]: Stopping Ceph mgr.compute-0.yqudah for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:21:02 np0005605476 python3[81709]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:02 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'stats'
Feb  2 12:21:02 np0005605476 podman[81729]: 2026-02-02 17:21:02.751900468 +0000 UTC m=+0.045794471 container create 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:02 np0005605476 ceph-mgr[80526]: mgr[py] Loading python module 'status'
Feb  2 12:21:02 np0005605476 systemd[1]: Started libpod-conmon-0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f.scope.
Feb  2 12:21:02 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dcc507ceddb685edd4a2eadb9720082a43799d8add3b539f3251c76627d6ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dcc507ceddb685edd4a2eadb9720082a43799d8add3b539f3251c76627d6ee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56dcc507ceddb685edd4a2eadb9720082a43799d8add3b539f3251c76627d6ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:02 np0005605476 podman[81729]: 2026-02-02 17:21:02.729231332 +0000 UTC m=+0.023125315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:02 np0005605476 podman[81729]: 2026-02-02 17:21:02.845398771 +0000 UTC m=+0.139292734 container init 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:02 np0005605476 podman[81729]: 2026-02-02 17:21:02.851258341 +0000 UTC m=+0.145152304 container start 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:21:02 np0005605476 podman[81729]: 2026-02-02 17:21:02.854835622 +0000 UTC m=+0.148729585 container attach 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:02 np0005605476 podman[81768]: 2026-02-02 17:21:02.895107808 +0000 UTC m=+0.086847431 container died 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:21:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b7d52903197d7ee38f5a1d66bc6fd1531b4da6c59b04e7eda07244e6ec803be7-merged.mount: Deactivated successfully.
Feb  2 12:21:02 np0005605476 podman[81768]: 2026-02-02 17:21:02.935317393 +0000 UTC m=+0.127057016 container remove 2eb8cef2baa0bc45b8a8738654032b15cadd3a40032d682221a76352b4e45966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:02 np0005605476 bash[81768]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-yqudah
Feb  2 12:21:02 np0005605476 systemd[1]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mgr.compute-0.yqudah.service: Main process exited, code=exited, status=143/n/a
Feb  2 12:21:03 np0005605476 systemd[1]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mgr.compute-0.yqudah.service: Failed with result 'exit-code'.
Feb  2 12:21:03 np0005605476 systemd[1]: Stopped Ceph mgr.compute-0.yqudah for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:21:03 np0005605476 systemd[1]: ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mgr.compute-0.yqudah.service: Consumed 6.321s CPU time, 401.4M memory peak, read 0B from disk, written 695.5K to disk.
Feb  2 12:21:03 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:03 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:03 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Added host compute-0
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Saving service mon spec with placement compute-0
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Saving service mgr spec with placement compute-0
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Saving service osd.default_drive_group spec with placement compute-0
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: Removing daemon mgr.compute-0.yqudah from compute-0 -- ports [8765]
Feb  2 12:21:03 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.yqudah
Feb  2 12:21:03 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.yqudah
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.yqudah"} v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.yqudah"} : dispatch
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.yqudah"}]': finished
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 8388b00d-9f67-48d6-b45d-ef6d177d1866 (Updating mgr deployment (-1 -> 1))
Feb  2 12:21:03 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 8388b00d-9f67-48d6-b45d-ef6d177d1866 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079998938' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:21:03 np0005605476 priceless_engelbart[81767]: 
Feb  2 12:21:03 np0005605476 priceless_engelbart[81767]: {"fsid":"eb48d0ef-3496-563c-b73d-661fb962013e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":44,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T17:20:17:007731+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T17:20:17.009633+0000","services":{}},"progress_events":{"8388b00d-9f67-48d6-b45d-ef6d177d1866":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:03 np0005605476 systemd[1]: libpod-0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f.scope: Deactivated successfully.
Feb  2 12:21:03 np0005605476 podman[81729]: 2026-02-02 17:21:03.36598751 +0000 UTC m=+0.659881473 container died 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-56dcc507ceddb685edd4a2eadb9720082a43799d8add3b539f3251c76627d6ee-merged.mount: Deactivated successfully.
Feb  2 12:21:03 np0005605476 podman[81729]: 2026-02-02 17:21:03.398388082 +0000 UTC m=+0.692282045 container remove 0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f (image=quay.io/ceph/ceph:v20, name=priceless_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:21:03 np0005605476 systemd[1]: libpod-conmon-0e85a941b59acb47f37e66776067068883165cde737f6534805158ea1e24784f.scope: Deactivated successfully.
Feb  2 12:21:03 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.718796321 +0000 UTC m=+0.044475068 container create cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:21:03 np0005605476 systemd[1]: Started libpod-conmon-cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114.scope.
Feb  2 12:21:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.694849683 +0000 UTC m=+0.020528430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.793166408 +0000 UTC m=+0.118845135 container init cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.800433632 +0000 UTC m=+0.126112339 container start cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.804309818 +0000 UTC m=+0.129988695 container attach cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 12:21:03 np0005605476 amazing_driscoll[81985]: 167 167
Feb  2 12:21:03 np0005605476 systemd[1]: libpod-cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114.scope: Deactivated successfully.
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.8061946 +0000 UTC m=+0.131873327 container died cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6abf7648221c5d51edf14d9ef80de49898f266aae54e0fd568b0abaed0b02972-merged.mount: Deactivated successfully.
Feb  2 12:21:03 np0005605476 podman[81969]: 2026-02-02 17:21:03.841338079 +0000 UTC m=+0.167016786 container remove cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:21:03 np0005605476 systemd[1]: libpod-conmon-cffadca00958708740d88d80e9954b65ebe543dbce9ad6f2da72dacd994b6114.scope: Deactivated successfully.
Feb  2 12:21:03 np0005605476 podman[82009]: 2026-02-02 17:21:03.951187211 +0000 UTC m=+0.035819302 container create a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:03 np0005605476 systemd[1]: Started libpod-conmon-a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960.scope.
Feb  2 12:21:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:04 np0005605476 podman[82009]: 2026-02-02 17:21:04.028944626 +0000 UTC m=+0.113576727 container init a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:04 np0005605476 podman[82009]: 2026-02-02 17:21:03.934613598 +0000 UTC m=+0.019245719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:04 np0005605476 podman[82009]: 2026-02-02 17:21:04.033473763 +0000 UTC m=+0.118105864 container start a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:04 np0005605476 podman[82009]: 2026-02-02 17:21:04.036603856 +0000 UTC m=+0.121235967 container attach a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: Removing key for mgr.compute-0.yqudah
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.yqudah"} : dispatch
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.yqudah"}]': finished
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:04 np0005605476 sharp_proskuriakova[82026]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:21:04 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:04 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:04 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new eaf642f2-cfb0-43d5-aab5-31b940552369
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "eaf642f2-cfb0-43d5-aab5-31b940552369"} v 0)
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2442012740' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "eaf642f2-cfb0-43d5-aab5-31b940552369"} : dispatch
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2442012740' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eaf642f2-cfb0-43d5-aab5-31b940552369"}]': finished
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:05 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb  2 12:21:05 np0005605476 lvm[82120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:05 np0005605476 lvm[82120]: VG ceph_vg0 finished
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2442012740' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "eaf642f2-cfb0-43d5-aab5-31b940552369"} : dispatch
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2442012740' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eaf642f2-cfb0-43d5-aab5-31b940552369"}]': finished
Feb  2 12:21:05 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 12:21:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446493601' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: stderr: got monmap epoch 1
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: --> Creating keyring file for osd.0
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb  2 12:21:05 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid eaf642f2-cfb0-43d5-aab5-31b940552369 --setuser ceph --setgroup ceph
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:05.866+0000 7f916209e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:05.887+0000 7f916209e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb  2 12:21:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:06 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 694d1bf9-7846-44e5-9a03-71f88deec6dd
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"} v 0)
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293689055' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"} : dispatch
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293689055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"}]': finished
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:07 np0005605476 lvm[83065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:07 np0005605476 lvm[83065]: VG ceph_vg1 finished
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/293689055' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"} : dispatch
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/293689055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"}]': finished
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 3 completed events
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 12:21:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985839681' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: stderr: got monmap epoch 1
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: --> Creating keyring file for osd.1
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb  2 12:21:07 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 694d1bf9-7846-44e5-9a03-71f88deec6dd --setuser ceph --setgroup ceph
Feb  2 12:21:08 np0005605476 ceph-mon[75197]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 12:21:08 np0005605476 ceph-mon[75197]: Cluster is now healthy
Feb  2 12:21:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:07.974+0000 7f9a992518c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:07.998+0000 7f9a992518c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb  2 12:21:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:08 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5
Feb  2 12:21:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5"} v 0)
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3172668205' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5"} : dispatch
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3172668205' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5"}]': finished
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:09 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:09 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:09 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3172668205' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5"} : dispatch
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3172668205' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5"}]': finished
Feb  2 12:21:09 np0005605476 lvm[84010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:09 np0005605476 lvm[84010]: VG ceph_vg2 finished
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb  2 12:21:09 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 12:21:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604261143' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: stderr: got monmap epoch 1
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: --> Creating keyring file for osd.2
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb  2 12:21:09 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5 --setuser ceph --setgroup ceph
Feb  2 12:21:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:09.997+0000 7f08f885a8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: stderr: 2026-02-02T17:21:10.018+0000 7f08f885a8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 12:21:10 np0005605476 sharp_proskuriakova[82026]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb  2 12:21:10 np0005605476 systemd[1]: libpod-a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960.scope: Deactivated successfully.
Feb  2 12:21:10 np0005605476 systemd[1]: libpod-a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960.scope: Consumed 5.473s CPU time.
Feb  2 12:21:10 np0005605476 podman[82009]: 2026-02-02 17:21:10.960554755 +0000 UTC m=+7.045186876 container died a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:21:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5e3fb420357c51be1cce7c813b2b3bcb29cf81056d95dfb4720342a69f072a6c-merged.mount: Deactivated successfully.
Feb  2 12:21:11 np0005605476 podman[82009]: 2026-02-02 17:21:11.008149836 +0000 UTC m=+7.092781937 container remove a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:21:11 np0005605476 systemd[1]: libpod-conmon-a35933740d54f674e0e6f09bccf95811911f07c7d47303c5a524c3a2535c9960.scope: Deactivated successfully.
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.473717228 +0000 UTC m=+0.049379652 container create 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:21:11 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:11 np0005605476 systemd[1]: Started libpod-conmon-324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d.scope.
Feb  2 12:21:11 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.446913232 +0000 UTC m=+0.022575726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.55243407 +0000 UTC m=+0.128096494 container init 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.559937707 +0000 UTC m=+0.135600111 container start 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:21:11 np0005605476 angry_nightingale[85025]: 167 167
Feb  2 12:21:11 np0005605476 systemd[1]: libpod-324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d.scope: Deactivated successfully.
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.563975486 +0000 UTC m=+0.139637910 container attach 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.564259581 +0000 UTC m=+0.139921975 container died 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-91c22af6f8919b255292b00127d87a4f866b2008b1d3c9b0792f7df7044060a7-merged.mount: Deactivated successfully.
Feb  2 12:21:11 np0005605476 podman[85008]: 2026-02-02 17:21:11.602316819 +0000 UTC m=+0.177979243 container remove 324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:21:11 np0005605476 systemd[1]: libpod-conmon-324c59b23fdcb38cff3014c5bb9dc0c3e9347a5bc70758f30ecacd654089da8d.scope: Deactivated successfully.
Feb  2 12:21:11 np0005605476 podman[85048]: 2026-02-02 17:21:11.731079503 +0000 UTC m=+0.031793892 container create c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:11 np0005605476 systemd[1]: Started libpod-conmon-c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594.scope.
Feb  2 12:21:11 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ecc785d095c64991b039fe5ee1bb1c1e95c9ccee77d56f93e426caff61c4b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ecc785d095c64991b039fe5ee1bb1c1e95c9ccee77d56f93e426caff61c4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ecc785d095c64991b039fe5ee1bb1c1e95c9ccee77d56f93e426caff61c4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ecc785d095c64991b039fe5ee1bb1c1e95c9ccee77d56f93e426caff61c4b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:11 np0005605476 podman[85048]: 2026-02-02 17:21:11.80542173 +0000 UTC m=+0.106136159 container init c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:11 np0005605476 podman[85048]: 2026-02-02 17:21:11.811140767 +0000 UTC m=+0.111855156 container start c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:21:11 np0005605476 podman[85048]: 2026-02-02 17:21:11.716400103 +0000 UTC m=+0.017114502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:11 np0005605476 podman[85048]: 2026-02-02 17:21:11.814557626 +0000 UTC m=+0.115272095 container attach c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]: {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    "0": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "devices": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "/dev/loop3"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            ],
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_name": "ceph_lv0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_size": "21470642176",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "name": "ceph_lv0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "tags": {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.crush_device_class": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.encrypted": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_id": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.vdo": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.with_tpm": "0"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            },
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "vg_name": "ceph_vg0"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        }
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    ],
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    "1": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "devices": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "/dev/loop4"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            ],
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_name": "ceph_lv1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_size": "21470642176",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "name": "ceph_lv1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "tags": {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.crush_device_class": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.encrypted": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_id": "1",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.vdo": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.with_tpm": "0"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            },
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "vg_name": "ceph_vg1"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        }
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    ],
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    "2": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "devices": [
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "/dev/loop5"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            ],
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_name": "ceph_lv2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_size": "21470642176",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "name": "ceph_lv2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "tags": {
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.crush_device_class": "",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.encrypted": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osd_id": "2",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.vdo": "0",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:                "ceph.with_tpm": "0"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            },
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "type": "block",
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:            "vg_name": "ceph_vg2"
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:        }
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]:    ]
Feb  2 12:21:12 np0005605476 nostalgic_buck[85065]: }
Feb  2 12:21:12 np0005605476 systemd[1]: libpod-c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594.scope: Deactivated successfully.
Feb  2 12:21:12 np0005605476 podman[85048]: 2026-02-02 17:21:12.097473996 +0000 UTC m=+0.398188385 container died c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb  2 12:21:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-38ecc785d095c64991b039fe5ee1bb1c1e95c9ccee77d56f93e426caff61c4b6-merged.mount: Deactivated successfully.
Feb  2 12:21:12 np0005605476 podman[85048]: 2026-02-02 17:21:12.131265082 +0000 UTC m=+0.431979461 container remove c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_buck, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 12:21:12 np0005605476 systemd[1]: libpod-conmon-c358f2f9b42f18dff5f0099725b12cc7cf1ea96496b5d9b8c00eea7ad2e52594.scope: Deactivated successfully.
Feb  2 12:21:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb  2 12:21:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  2 12:21:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:12 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb  2 12:21:12 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb  2 12:21:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  2 12:21:12 np0005605476 podman[85177]: 2026-02-02 17:21:12.60481004 +0000 UTC m=+0.035920243 container create e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:21:12 np0005605476 systemd[1]: Started libpod-conmon-e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e.scope.
Feb  2 12:21:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:12 np0005605476 podman[85177]: 2026-02-02 17:21:12.673017852 +0000 UTC m=+0.104128055 container init e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:21:12 np0005605476 podman[85177]: 2026-02-02 17:21:12.677789163 +0000 UTC m=+0.108899356 container start e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:21:12 np0005605476 podman[85177]: 2026-02-02 17:21:12.68049138 +0000 UTC m=+0.111601583 container attach e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:12 np0005605476 relaxed_pasteur[85193]: 167 167
Feb  2 12:21:12 np0005605476 systemd[1]: libpod-e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e.scope: Deactivated successfully.
Feb  2 12:21:12 np0005605476 podman[85177]: 2026-02-02 17:21:12.590013868 +0000 UTC m=+0.021124091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:12 np0005605476 podman[85198]: 2026-02-02 17:21:12.708498687 +0000 UTC m=+0.018775351 container died e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:21:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-38ac1f5211f2f5eaf0271ac7071b19c70d0e335735ef745b1ba18e5e96574894-merged.mount: Deactivated successfully.
Feb  2 12:21:12 np0005605476 podman[85198]: 2026-02-02 17:21:12.739846441 +0000 UTC m=+0.050123115 container remove e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:21:12 np0005605476 systemd[1]: libpod-conmon-e8dcd9a91cbde7aafea2d5fbe5a518692d70fbfcfe97386547fb593d46e7ca1e.scope: Deactivated successfully.
Feb  2 12:21:12 np0005605476 podman[85227]: 2026-02-02 17:21:12.908212239 +0000 UTC m=+0.031491797 container create c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:21:12 np0005605476 systemd[1]: Started libpod-conmon-c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce.scope.
Feb  2 12:21:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:12 np0005605476 podman[85227]: 2026-02-02 17:21:12.894706449 +0000 UTC m=+0.017986027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:12 np0005605476 podman[85227]: 2026-02-02 17:21:12.99746263 +0000 UTC m=+0.120742218 container init c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:13 np0005605476 podman[85227]: 2026-02-02 17:21:13.007306578 +0000 UTC m=+0.130586136 container start c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:13 np0005605476 podman[85227]: 2026-02-02 17:21:13.01097042 +0000 UTC m=+0.134250058 container attach c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:13 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test[85244]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 12:21:13 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test[85244]:                            [--no-systemd] [--no-tmpfs]
Feb  2 12:21:13 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test[85244]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 12:21:13 np0005605476 systemd[1]: libpod-c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce.scope: Deactivated successfully.
Feb  2 12:21:13 np0005605476 podman[85227]: 2026-02-02 17:21:13.215604447 +0000 UTC m=+0.338884005 container died c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3fd3671c535784b0816ee1030cf288a8d363a0509b7822f9105e3d9b43021619-merged.mount: Deactivated successfully.
Feb  2 12:21:13 np0005605476 podman[85227]: 2026-02-02 17:21:13.251645701 +0000 UTC m=+0.374925259 container remove c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:21:13 np0005605476 systemd[1]: libpod-conmon-c079f5758406746e3802151fe39b33d889408dfa30786efce14a196c1796dbce.scope: Deactivated successfully.
Feb  2 12:21:13 np0005605476 ceph-mon[75197]: Deploying daemon osd.0 on compute-0
Feb  2 12:21:13 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:13 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:13 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:13 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:13 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:13 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:13 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:13 np0005605476 systemd[1]: Starting Ceph osd.0 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:21:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:14 np0005605476 podman[85405]: 2026-02-02 17:21:14.080576913 +0000 UTC m=+0.033178636 container create 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:14 np0005605476 podman[85405]: 2026-02-02 17:21:14.148265987 +0000 UTC m=+0.100867730 container init 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:21:14 np0005605476 podman[85405]: 2026-02-02 17:21:14.154084746 +0000 UTC m=+0.106686459 container start 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:21:14 np0005605476 podman[85405]: 2026-02-02 17:21:14.157715728 +0000 UTC m=+0.110317441 container attach 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:14 np0005605476 podman[85405]: 2026-02-02 17:21:14.06513687 +0000 UTC m=+0.017738603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:14 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 bash[85405]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 bash[85405]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:14 np0005605476 lvm[85504]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:14 np0005605476 lvm[85504]: VG ceph_vg0 finished
Feb  2 12:21:14 np0005605476 lvm[85507]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:14 np0005605476 lvm[85507]: VG ceph_vg1 finished
Feb  2 12:21:14 np0005605476 lvm[85509]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:14 np0005605476 lvm[85509]: VG ceph_vg2 finished
Feb  2 12:21:14 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:14 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 bash[85405]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:14 np0005605476 bash[85405]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:14 np0005605476 bash[85405]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:15 np0005605476 bash[85405]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate[85421]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 12:21:15 np0005605476 bash[85405]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 12:21:15 np0005605476 systemd[1]: libpod-5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d.scope: Deactivated successfully.
Feb  2 12:21:15 np0005605476 podman[85405]: 2026-02-02 17:21:15.153992122 +0000 UTC m=+1.106593865 container died 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:15 np0005605476 systemd[1]: libpod-5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d.scope: Consumed 1.240s CPU time.
Feb  2 12:21:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2c499ac2ddb5231e63e5a40e278385df71236c476f4b06bbef9bd5bb7743a7e3-merged.mount: Deactivated successfully.
Feb  2 12:21:15 np0005605476 podman[85405]: 2026-02-02 17:21:15.192186463 +0000 UTC m=+1.144788176 container remove 5369af12a1fae4ee88b5dfb40c215bd0c2d9efe054bd4e4072739b253001142d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:21:15 np0005605476 podman[85677]: 2026-02-02 17:21:15.314370365 +0000 UTC m=+0.027542570 container create 5ec5d30977a06416493ca65ffb2bc1efb368bd71870a20ffcf4a99fc8c30655f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:21:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ee672d236bb08fe51c40da1b138fce5f3b3744b6b9cf85b010fa26ae16304b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ee672d236bb08fe51c40da1b138fce5f3b3744b6b9cf85b010fa26ae16304b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ee672d236bb08fe51c40da1b138fce5f3b3744b6b9cf85b010fa26ae16304b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ee672d236bb08fe51c40da1b138fce5f3b3744b6b9cf85b010fa26ae16304b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ee672d236bb08fe51c40da1b138fce5f3b3744b6b9cf85b010fa26ae16304b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:15 np0005605476 podman[85677]: 2026-02-02 17:21:15.36103538 +0000 UTC m=+0.074207605 container init 5ec5d30977a06416493ca65ffb2bc1efb368bd71870a20ffcf4a99fc8c30655f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:15 np0005605476 podman[85677]: 2026-02-02 17:21:15.365532746 +0000 UTC m=+0.078704951 container start 5ec5d30977a06416493ca65ffb2bc1efb368bd71870a20ffcf4a99fc8c30655f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:21:15 np0005605476 bash[85677]: 5ec5d30977a06416493ca65ffb2bc1efb368bd71870a20ffcf4a99fc8c30655f
Feb  2 12:21:15 np0005605476 podman[85677]: 2026-02-02 17:21:15.301921693 +0000 UTC m=+0.015093918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:15 np0005605476 systemd[1]: Started Ceph osd.0 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: pidfile_write: ignore empty --pid-file
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:15 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb  2 12:21:15 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee400 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbee000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: load: jerasure load: lrc 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fbbefc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount shared_bdev_used = 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Git sha 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DB SUMMARY
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DB Session ID:  F3S1BDDFBRRV7ULM3Y52
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                     Options.env: 0x5572fba7fea0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                Options.info_log: 0x5572fcb068a0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.write_buffer_manager: 0x5572fbae4b40
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Compression algorithms supported:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kZSTD supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kXpressCompression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kZlibCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba83a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba83a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba83a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 060af0a6-c8fc-43b5-89a9-f7a706846619
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875704601, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875705680, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: freelist init
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: freelist _read_cfg
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs umount
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bdev(0x5572fc885800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluefs mount shared_bdev_used = 27262976
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Git sha 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DB SUMMARY
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DB Session ID:  F3S1BDDFBRRV7ULM3Y53
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                     Options.env: 0x5572fba7fce0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                Options.info_log: 0x5572fcb06a40
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.write_buffer_manager: 0x5572fbae4b40
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Compression algorithms supported:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kZSTD supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kXpressCompression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kZlibCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb06bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb070c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba83a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb070c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5572fba83a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5572fcb070c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5572fba83a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 060af0a6-c8fc-43b5-89a9-f7a706846619
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875748352, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875753304, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052875, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "060af0a6-c8fc-43b5-89a9-f7a706846619", "db_session_id": "F3S1BDDFBRRV7ULM3Y53", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875767331, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052875, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "060af0a6-c8fc-43b5-89a9-f7a706846619", "db_session_id": "F3S1BDDFBRRV7ULM3Y53", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875770257, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052875, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "060af0a6-c8fc-43b5-89a9-f7a706846619", "db_session_id": "F3S1BDDFBRRV7ULM3Y53", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052875771529, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5572fccea000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: DB pointer 0x5572fccc0000
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 460.80 MB usag
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: _get_class not permitted to load lua
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: _get_class not permitted to load sdk
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 load_pgs
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 load_pgs opened 0 pgs
Feb  2 12:21:15 np0005605476 ceph-osd[85696]: osd.0 0 log_to_monitors true
Feb  2 12:21:15 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0[85692]: 2026-02-02T17:21:15.796+0000 7fc676b918c0 -1 osd.0 0 log_to_monitors true
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb  2 12:21:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  2 12:21:15 np0005605476 podman[86238]: 2026-02-02 17:21:15.924400068 +0000 UTC m=+0.040361688 container create 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:21:15 np0005605476 systemd[1]: Started libpod-conmon-114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8.scope.
Feb  2 12:21:15 np0005605476 podman[86238]: 2026-02-02 17:21:15.901375446 +0000 UTC m=+0.017337096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:16 np0005605476 podman[86238]: 2026-02-02 17:21:16.028694155 +0000 UTC m=+0.144655805 container init 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:21:16 np0005605476 podman[86238]: 2026-02-02 17:21:16.035112825 +0000 UTC m=+0.151074435 container start 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:21:16 np0005605476 exciting_feynman[86254]: 167 167
Feb  2 12:21:16 np0005605476 systemd[1]: libpod-114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8.scope: Deactivated successfully.
Feb  2 12:21:16 np0005605476 conmon[86254]: conmon 114e993b9dcf220f7d7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8.scope/container/memory.events
Feb  2 12:21:16 np0005605476 podman[86238]: 2026-02-02 17:21:16.058367811 +0000 UTC m=+0.174329431 container attach 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:16 np0005605476 podman[86238]: 2026-02-02 17:21:16.059563311 +0000 UTC m=+0.175524921 container died 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:21:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e1a76f90798f93fa15d4fd54553437e60e2887bb2a6ff75e66d0711e69738be1-merged.mount: Deactivated successfully.
Feb  2 12:21:16 np0005605476 podman[86238]: 2026-02-02 17:21:16.182537736 +0000 UTC m=+0.298499346 container remove 114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:16 np0005605476 systemd[1]: libpod-conmon-114e993b9dcf220f7d7f8d36ebda1991b6d381fb9c55afff2e0789251ead71c8.scope: Deactivated successfully.
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: Deploying daemon osd.1 on compute-0
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:16 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:16 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.449719939 +0000 UTC m=+0.054412938 container create c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:16 np0005605476 systemd[1]: Started libpod-conmon-c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa.scope.
Feb  2 12:21:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.426958681 +0000 UTC m=+0.031651670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.553766312 +0000 UTC m=+0.158459311 container init c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.559204414 +0000 UTC m=+0.163897383 container start c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.56248409 +0000 UTC m=+0.167177059 container attach c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:16 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test[86300]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 12:21:16 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test[86300]:                            [--no-systemd] [--no-tmpfs]
Feb  2 12:21:16 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test[86300]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 12:21:16 np0005605476 systemd[1]: libpod-c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa.scope: Deactivated successfully.
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.763982723 +0000 UTC m=+0.368675702 container died c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:21:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f7e5dc54594a9d1dee1985d78d41a6d79d07d5d430df75b4ad6b2f17828d696c-merged.mount: Deactivated successfully.
Feb  2 12:21:16 np0005605476 podman[86284]: 2026-02-02 17:21:16.804508644 +0000 UTC m=+0.409201613 container remove c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:21:16 np0005605476 systemd[1]: libpod-conmon-c0ab40c525142edd27f5ea927a6abe74a93d5b1f09689be881535f52651203aa.scope: Deactivated successfully.
Feb  2 12:21:16 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 12:21:16 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 12:21:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:17 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:17 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:17 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:17 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:17 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 done with init, starting boot process
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 start_boot
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 12:21:17 np0005605476 ceph-osd[85696]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2471251250; not ready for session (expect reconnect)
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 12:21:17 np0005605476 ceph-mon[75197]: from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:17 np0005605476 systemd[1]: Starting Ceph osd.1 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:21:17 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:17 np0005605476 podman[86459]: 2026-02-02 17:21:17.713195415 +0000 UTC m=+0.062330243 container create 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:17 np0005605476 podman[86459]: 2026-02-02 17:21:17.675752807 +0000 UTC m=+0.024887715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:17 np0005605476 podman[86459]: 2026-02-02 17:21:17.791645462 +0000 UTC m=+0.140780310 container init 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:17 np0005605476 podman[86459]: 2026-02-02 17:21:17.796684718 +0000 UTC m=+0.145819536 container start 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:21:17 np0005605476 podman[86459]: 2026-02-02 17:21:17.80681201 +0000 UTC m=+0.155946858 container attach 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:17 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:17 np0005605476 bash[86459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:17 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:17 np0005605476 bash[86459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:18 np0005605476 lvm[86560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:18 np0005605476 lvm[86558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:18 np0005605476 lvm[86560]: VG ceph_vg1 finished
Feb  2 12:21:18 np0005605476 lvm[86558]: VG ceph_vg0 finished
Feb  2 12:21:18 np0005605476 lvm[86562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:18 np0005605476 lvm[86562]: VG ceph_vg2 finished
Feb  2 12:21:18 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2471251250; not ready for session (expect reconnect)
Feb  2 12:21:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:18 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:18 np0005605476 ceph-mon[75197]: from='osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:18 np0005605476 bash[86459]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 12:21:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:18 np0005605476 bash[86459]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 12:21:18 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate[86474]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 12:21:18 np0005605476 bash[86459]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 12:21:18 np0005605476 systemd[1]: libpod-6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928.scope: Deactivated successfully.
Feb  2 12:21:18 np0005605476 podman[86459]: 2026-02-02 17:21:18.743021512 +0000 UTC m=+1.092156350 container died 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:21:18 np0005605476 systemd[1]: libpod-6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928.scope: Consumed 1.209s CPU time.
Feb  2 12:21:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d4bc648421e71be064b0b202a58ab50d03aeba9cdb1ba476c8e385fd3f9aaa2e-merged.mount: Deactivated successfully.
Feb  2 12:21:18 np0005605476 podman[86459]: 2026-02-02 17:21:18.828744862 +0000 UTC m=+1.177879710 container remove 6229b534aaed64920979748b7b2a494f2516379f4dc0d8169eb2da9419b22928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1-activate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:21:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:19 np0005605476 podman[86717]: 2026-02-02 17:21:19.017646811 +0000 UTC m=+0.043261338 container create 849770b4bec6f7bda5e71b5d2b63467f261481956bf4b714be6269cbddd8aa42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:21:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898685d04b362b3615be44f4ef9ba6201398fd991d72fa87b426e4a238337a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898685d04b362b3615be44f4ef9ba6201398fd991d72fa87b426e4a238337a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898685d04b362b3615be44f4ef9ba6201398fd991d72fa87b426e4a238337a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898685d04b362b3615be44f4ef9ba6201398fd991d72fa87b426e4a238337a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898685d04b362b3615be44f4ef9ba6201398fd991d72fa87b426e4a238337a9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:19 np0005605476 podman[86717]: 2026-02-02 17:21:19.083014194 +0000 UTC m=+0.108628731 container init 849770b4bec6f7bda5e71b5d2b63467f261481956bf4b714be6269cbddd8aa42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:19 np0005605476 podman[86717]: 2026-02-02 17:21:18.993437248 +0000 UTC m=+0.019051815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:19 np0005605476 podman[86717]: 2026-02-02 17:21:19.093503963 +0000 UTC m=+0.119118490 container start 849770b4bec6f7bda5e71b5d2b63467f261481956bf4b714be6269cbddd8aa42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:21:19 np0005605476 bash[86717]: 849770b4bec6f7bda5e71b5d2b63467f261481956bf4b714be6269cbddd8aa42
Feb  2 12:21:19 np0005605476 systemd[1]: Started Ceph osd.1 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: pidfile_write: ignore empty --pid-file
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb  2 12:21:19 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52400 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a52000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 51.756 iops: 13249.628 elapsed_sec: 0.226
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [WRN] : OSD bench result of 13249.628189 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 0 waiting for initial osdmap
Feb  2 12:21:19 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0[85692]: 2026-02-02T17:21:19.303+0000 7fc673325640 -1 osd.0 0 waiting for initial osdmap
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 set_numa_affinity not setting numa affinity
Feb  2 12:21:19 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-0[85692]: 2026-02-02T17:21:19.325+0000 7fc66d918640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:19 np0005605476 ceph-osd[85696]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: load: jerasure load: lrc 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2471251250; not ready for session (expect reconnect)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:19 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b25a53c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount shared_bdev_used = 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Git sha 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DB SUMMARY
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DB Session ID:  Y146F2H9GA7EWZBZ8FJ0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                     Options.env: 0x555b258e3ea0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                Options.info_log: 0x555b2696a8a0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.write_buffer_manager: 0x555b25944b40
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Compression algorithms supported:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kZSTD supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kXpressCompression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kZlibCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-mgr[75493]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5d11a463-af92-4391-b266-82d31b19ee87
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879480486, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879482234, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: freelist init
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: freelist _read_cfg
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs umount
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bdev(0x555b266e9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluefs mount shared_bdev_used = 27262976
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Git sha 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DB SUMMARY
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DB Session ID:  Y146F2H9GA7EWZBZ8FJ1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                     Options.env: 0x555b258e3ce0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                Options.info_log: 0x555b2696a960
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.write_buffer_manager: 0x555b25945900
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Compression algorithms supported:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kZSTD supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kXpressCompression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kZlibCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555b258e78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696b840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696bd80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696bd80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555b2696bd80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555b258e7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5d11a463-af92-4391-b266-82d31b19ee87
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879551170, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879558514, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d11a463-af92-4391-b266-82d31b19ee87", "db_session_id": "Y146F2H9GA7EWZBZ8FJ1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879562923, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d11a463-af92-4391-b266-82d31b19ee87", "db_session_id": "Y146F2H9GA7EWZBZ8FJ1", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879565561, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d11a463-af92-4391-b266-82d31b19ee87", "db_session_id": "Y146F2H9GA7EWZBZ8FJ1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052879567009, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555b26b4e000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: DB pointer 0x555b26b24000
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 460.80 MB usag
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: _get_class not permitted to load lua
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: _get_class not permitted to load sdk
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 load_pgs
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 load_pgs opened 0 pgs
Feb  2 12:21:19 np0005605476 ceph-osd[86737]: osd.1 0 log_to_monitors true
Feb  2 12:21:19 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1[86733]: 2026-02-02T17:21:19.594+0000 7fe433b5a8c0 -1 osd.1 0 log_to_monitors true
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb  2 12:21:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.668754734 +0000 UTC m=+0.032357302 container create 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:19 np0005605476 systemd[1]: Started libpod-conmon-8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d.scope.
Feb  2 12:21:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.740820672 +0000 UTC m=+0.104423320 container init 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.746832365 +0000 UTC m=+0.110434943 container start 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.75007713 +0000 UTC m=+0.113679808 container attach 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.652415376 +0000 UTC m=+0.016017964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:19 np0005605476 pedantic_golick[87293]: 167 167
Feb  2 12:21:19 np0005605476 systemd[1]: libpod-8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d.scope: Deactivated successfully.
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.752280338 +0000 UTC m=+0.115882946 container died 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-59ad586704bdbe2af869fc551efab56218ebef1487720bc4585b702e9192cac7-merged.mount: Deactivated successfully.
Feb  2 12:21:19 np0005605476 podman[87277]: 2026-02-02 17:21:19.792117906 +0000 UTC m=+0.155720504 container remove 8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:21:19 np0005605476 systemd[1]: libpod-conmon-8c1f775fb95610bf2654fc67fd2254abf18f7139f8d37fe36d37a827f5e4543d.scope: Deactivated successfully.
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.024886522 +0000 UTC m=+0.045359414 container create 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:20 np0005605476 systemd[1]: Started libpod-conmon-4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560.scope.
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.001836019 +0000 UTC m=+0.022309001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:20 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.129025577 +0000 UTC m=+0.149498529 container init 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.136671857 +0000 UTC m=+0.157144749 container start 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.139537806 +0000 UTC m=+0.160010918 container attach 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250] boot
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 12:21:20 np0005605476 ceph-osd[85696]: osd.0 9 state: booting -> active
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:20 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:20 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:20 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test[87337]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 12:21:20 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test[87337]:                            [--no-systemd] [--no-tmpfs]
Feb  2 12:21:20 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test[87337]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 12:21:20 np0005605476 systemd[1]: libpod-4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560.scope: Deactivated successfully.
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.338800971 +0000 UTC m=+0.359273883 container died 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:21:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d2b4ba06e9840df1c7caa8a8e3ece0f66b21e35d7f6c6ecff14ef37ea6bd27d4-merged.mount: Deactivated successfully.
Feb  2 12:21:20 np0005605476 podman[87321]: 2026-02-02 17:21:20.382807961 +0000 UTC m=+0.403280883 container remove 4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:21:20 np0005605476 systemd[1]: libpod-conmon-4f756e4a16ed42a3cfe7c9ad43d25181b9c27d1e658fc311e9c3d309dc85b560.scope: Deactivated successfully.
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: Deploying daemon osd.2 on compute-0
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: OSD bench result of 13249.628189 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: osd.0 [v2:192.168.122.100:6802/2471251250,v1:192.168.122.100:6803/2471251250] boot
Feb  2 12:21:20 np0005605476 ceph-mon[75197]: from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:20 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:20 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 12:21:20 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 12:21:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb  2 12:21:20 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:20 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:20 np0005605476 systemd[1]: Reloading.
Feb  2 12:21:20 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:21:20 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:21:21 np0005605476 systemd[1]: Starting Ceph osd.2 for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 done with init, starting boot process
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 start_boot
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 12:21:21 np0005605476 ceph-osd[86737]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:21 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:21 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:21 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2577845913; not ready for session (expect reconnect)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:21 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:21 np0005605476 podman[87496]: 2026-02-02 17:21:21.294665756 +0000 UTC m=+0.042720829 container create b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:21:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:21 np0005605476 podman[87496]: 2026-02-02 17:21:21.367713691 +0000 UTC m=+0.115768784 container init b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:21 np0005605476 podman[87496]: 2026-02-02 17:21:21.269933445 +0000 UTC m=+0.017988548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:21 np0005605476 podman[87496]: 2026-02-02 17:21:21.377377726 +0000 UTC m=+0.125432809 container start b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:21 np0005605476 podman[87496]: 2026-02-02 17:21:21.384274053 +0000 UTC m=+0.132329126 container attach b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: from='osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:21 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] creating mgr pool
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb  2 12:21:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  2 12:21:21 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:21 np0005605476 bash[87496]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:21 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:21 np0005605476 bash[87496]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:22 np0005605476 lvm[87600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:22 np0005605476 lvm[87597]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:22 np0005605476 lvm[87600]: VG ceph_vg1 finished
Feb  2 12:21:22 np0005605476 lvm[87597]: VG ceph_vg0 finished
Feb  2 12:21:22 np0005605476 lvm[87602]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:22 np0005605476 lvm[87602]: VG ceph_vg2 finished
Feb  2 12:21:22 np0005605476 lvm[87603]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:22 np0005605476 lvm[87603]: VG ceph_vg0 finished
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:22 np0005605476 bash[87496]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 12:21:22 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2577845913; not ready for session (expect reconnect)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:22 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:22 np0005605476 bash[87496]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 12:21:22 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate[87512]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 12:21:22 np0005605476 bash[87496]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 12:21:22 np0005605476 systemd[1]: libpod-b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6.scope: Deactivated successfully.
Feb  2 12:21:22 np0005605476 systemd[1]: libpod-b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6.scope: Consumed 1.262s CPU time.
Feb  2 12:21:22 np0005605476 podman[87496]: 2026-02-02 17:21:22.386206124 +0000 UTC m=+1.134261227 container died b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:21:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd6e84b330648db5ef842711970211166fbc67a75d8d915cae37ebeb7e891bc4-merged.mount: Deactivated successfully.
Feb  2 12:21:22 np0005605476 podman[87496]: 2026-02-02 17:21:22.466724645 +0000 UTC m=+1.214779728 container remove b8be45a82db41706102416ec50af295c79407f4a2d40f9e46607d771ff45bbd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb  2 12:21:22 np0005605476 ceph-osd[85696]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 12:21:22 np0005605476 ceph-osd[85696]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb  2 12:21:22 np0005605476 ceph-osd[85696]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:22 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:22 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  2 12:21:22 np0005605476 podman[87773]: 2026-02-02 17:21:22.644647317 +0000 UTC m=+0.044021751 container create 49ee9de1004adc227a28add932102ae6c092e83b89c3463cc66529c1092a3071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v25: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb  2 12:21:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2987def45149fc5249cbc723bda35e766362949640d794514681c630a9cc739a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2987def45149fc5249cbc723bda35e766362949640d794514681c630a9cc739a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2987def45149fc5249cbc723bda35e766362949640d794514681c630a9cc739a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2987def45149fc5249cbc723bda35e766362949640d794514681c630a9cc739a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2987def45149fc5249cbc723bda35e766362949640d794514681c630a9cc739a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:22 np0005605476 podman[87773]: 2026-02-02 17:21:22.712173077 +0000 UTC m=+0.111547531 container init 49ee9de1004adc227a28add932102ae6c092e83b89c3463cc66529c1092a3071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:21:22 np0005605476 podman[87773]: 2026-02-02 17:21:22.620156249 +0000 UTC m=+0.019530743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:22 np0005605476 podman[87773]: 2026-02-02 17:21:22.722708937 +0000 UTC m=+0.122083371 container start 49ee9de1004adc227a28add932102ae6c092e83b89c3463cc66529c1092a3071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:22 np0005605476 bash[87773]: 49ee9de1004adc227a28add932102ae6c092e83b89c3463cc66529c1092a3071
Feb  2 12:21:22 np0005605476 systemd[1]: Started Ceph osd.2 for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: pidfile_write: ignore empty --pid-file
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e400 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559e000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: load: jerasure load: lrc 
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:22 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 48.143 iops: 12324.659 elapsed_sec: 0.243
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: log_channel(cluster) log [WRN] : OSD bench result of 12324.658774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 0 waiting for initial osdmap
Feb  2 12:21:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1[86733]: 2026-02-02T17:21:23.072+0000 7fe42fadc640 -1 osd.1 0 waiting for initial osdmap
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-1[86733]: 2026-02-02T17:21:23.093+0000 7fe42a8e1640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 set_numa_affinity not setting numa affinity
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x56108559fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount shared_bdev_used = 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Git sha 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DB SUMMARY
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DB Session ID:  ZU6ODQSEKL4CAS6Z4D8S
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                     Options.env: 0x56108542fea0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                Options.info_log: 0x5610864808a0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.write_buffer_manager: 0x561085494b40
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Compression algorithms supported:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZSTD supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kXpressCompression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZlibCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5610854338d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5610854338d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085433a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085433a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086480c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085433a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf4913e5-61bc-45cf-84cf-f743c000e6ec
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883177487, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883178371, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: freelist init
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: freelist _read_cfg
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs umount
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bdev(0x561086235800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluefs mount shared_bdev_used = 27262976
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: RocksDB version: 7.9.2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Git sha 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DB SUMMARY
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DB Session ID:  ZU6ODQSEKL4CAS6Z4D8T
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: CURRENT file:  CURRENT
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.error_if_exists: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.create_if_missing: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                     Options.env: 0x56108542fd50
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                Options.info_log: 0x561086481100
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.statistics: (nil)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.use_fsync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.db_log_dir: 
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.write_buffer_manager: 0x561085494b40
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.unordered_write: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.row_cache: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                              Options.wal_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.two_write_queues: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.wal_compression: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.atomic_flush: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_background_jobs: 4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_background_compactions: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_subcompactions: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.max_open_files: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Compression algorithms supported:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZSTD supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kXpressCompression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kZlibCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085432430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085432430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2577845913; not ready for session (expect reconnect)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085432430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085432430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085432430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085432430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085432430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481900)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561085433350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481900)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085433350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:           Options.merge_operator: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561086481900)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561085433350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.compression: LZ4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.num_levels: 7
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.223280975 +0000 UTC m=+0.046137967 container create d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.bloom_locality: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                               Options.ttl: 2592000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                       Options.enable_blob_files: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                           Options.min_blob_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf4913e5-61bc-45cf-84cf-f743c000e6ec
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883223993, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883228567, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf4913e5-61bc-45cf-84cf-f743c000e6ec", "db_session_id": "ZU6ODQSEKL4CAS6Z4D8T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883235574, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf4913e5-61bc-45cf-84cf-f743c000e6ec", "db_session_id": "ZU6ODQSEKL4CAS6Z4D8T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883238472, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf4913e5-61bc-45cf-84cf-f743c000e6ec", "db_session_id": "ZU6ODQSEKL4CAS6Z4D8T", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770052883240195, "job": 1, "event": "recovery_finished"}
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 12:21:23 np0005605476 systemd[1]: Started libpod-conmon-d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724.scope.
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561086665c00
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: DB pointer 0x56108663a000
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 460.80 MB usag
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: _get_class not permitted to load lua
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: _get_class not permitted to load sdk
Feb  2 12:21:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 load_pgs
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 load_pgs opened 0 pgs
Feb  2 12:21:23 np0005605476 ceph-osd[87792]: osd.2 0 log_to_monitors true
Feb  2 12:21:23 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2[87788]: 2026-02-02T17:21:23.275+0000 7fc3d3a898c0 -1 osd.2 0 log_to_monitors true
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.197630958 +0000 UTC m=+0.020487990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.307184835 +0000 UTC m=+0.130041827 container init d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.31514749 +0000 UTC m=+0.138004502 container start d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.319091348 +0000 UTC m=+0.141948360 container attach d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:21:23 np0005605476 amazing_hopper[88285]: 167 167
Feb  2 12:21:23 np0005605476 systemd[1]: libpod-d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724.scope: Deactivated successfully.
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.320589413 +0000 UTC m=+0.143446405 container died d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d740cf43e0af9b5f3f48be3f9a2933ce7c8d2d7f875d340fd57e639b85482ee1-merged.mount: Deactivated successfully.
Feb  2 12:21:23 np0005605476 podman[87942]: 2026-02-02 17:21:23.362570549 +0000 UTC m=+0.185427541 container remove d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb  2 12:21:23 np0005605476 systemd[1]: libpod-conmon-d9b5c284b2e351b3a62662d4d1b3e319457fe42a0e0c44e792c8d2954ac80724.scope: Deactivated successfully.
Feb  2 12:21:23 np0005605476 podman[88343]: 2026-02-02 17:21:23.480924685 +0000 UTC m=+0.041910695 container create 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:23 np0005605476 systemd[1]: Started libpod-conmon-6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf.scope.
Feb  2 12:21:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbea04b9c0e451e9a1ec8dd3163c77406eb5bfe4eb9c71a41724f3c9e97b1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbea04b9c0e451e9a1ec8dd3163c77406eb5bfe4eb9c71a41724f3c9e97b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbea04b9c0e451e9a1ec8dd3163c77406eb5bfe4eb9c71a41724f3c9e97b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecbea04b9c0e451e9a1ec8dd3163c77406eb5bfe4eb9c71a41724f3c9e97b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913] boot
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Feb  2 12:21:23 np0005605476 podman[88343]: 2026-02-02 17:21:23.546806068 +0000 UTC m=+0.107792158 container init 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 12 state: booting -> active
Feb  2 12:21:23 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:23 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  2 12:21:23 np0005605476 podman[88343]: 2026-02-02 17:21:23.459130814 +0000 UTC m=+0.020116914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:23 np0005605476 podman[88343]: 2026-02-02 17:21:23.559983942 +0000 UTC m=+0.120969972 container start 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:23 np0005605476 podman[88343]: 2026-02-02 17:21:23.563246778 +0000 UTC m=+0.124232808 container attach 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:24 np0005605476 lvm[88435]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:24 np0005605476 lvm[88435]: VG ceph_vg0 finished
Feb  2 12:21:24 np0005605476 lvm[88437]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:24 np0005605476 lvm[88437]: VG ceph_vg1 finished
Feb  2 12:21:24 np0005605476 lvm[88438]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:24 np0005605476 lvm[88438]: VG ceph_vg2 finished
Feb  2 12:21:24 np0005605476 upbeat_rubin[88360]: {}
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 12:21:24 np0005605476 systemd[1]: libpod-6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf.scope: Deactivated successfully.
Feb  2 12:21:24 np0005605476 systemd[1]: libpod-6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf.scope: Consumed 1.075s CPU time.
Feb  2 12:21:24 np0005605476 podman[88343]: 2026-02-02 17:21:24.330910637 +0000 UTC m=+0.891896657 container died 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb  2 12:21:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8ecbea04b9c0e451e9a1ec8dd3163c77406eb5bfe4eb9c71a41724f3c9e97b1d-merged.mount: Deactivated successfully.
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 done with init, starting boot process
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 start_boot
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 12:21:24 np0005605476 ceph-osd[87792]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:24 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2466851707; not ready for session (expect reconnect)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: OSD bench result of 12324.658774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: osd.1 [v2:192.168.122.100:6806/2577845913,v1:192.168.122.100:6807/2577845913] boot
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 12:21:24 np0005605476 podman[88343]: 2026-02-02 17:21:24.790779081 +0000 UTC m=+1.351765101 container remove 6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rubin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] creating main.db for devicehealth
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:24 np0005605476 systemd[1]: libpod-conmon-6a7b78f144194a77933135dd4767a5c5ae0d721a6ebe444bbcbe0834e2a425bf.scope: Deactivated successfully.
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 12:21:24 np0005605476 ceph-mgr[75493]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 12:21:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 12:21:25 np0005605476 podman[88587]: 2026-02-02 17:21:25.488859405 +0000 UTC m=+0.171411841 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:21:25 np0005605476 podman[88607]: 2026-02-02 17:21:25.683340979 +0000 UTC m=+0.106636958 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:25 np0005605476 podman[88587]: 2026-02-02 17:21:25.711499379 +0000 UTC m=+0.394051845 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb  2 12:21:25 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2466851707; not ready for session (expect reconnect)
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:25 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:25 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: from='osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.700367768 +0000 UTC m=+0.041715772 container create ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:26 np0005605476 systemd[1]: Started libpod-conmon-ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84.scope.
Feb  2 12:21:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.677971296 +0000 UTC m=+0.019319320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.778339116 +0000 UTC m=+0.119687150 container init ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.783522264 +0000 UTC m=+0.124870278 container start ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:26 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2466851707; not ready for session (expect reconnect)
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:26 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:26 np0005605476 goofy_yonath[88815]: 167 167
Feb  2 12:21:26 np0005605476 systemd[1]: libpod-ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84.scope: Deactivated successfully.
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.788958867 +0000 UTC m=+0.130306891 container attach ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.789284312 +0000 UTC m=+0.130632316 container died ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4724b4a80acef4e2807a7a7d1576161e178578df526b5b3e73310b45a597b6ed-merged.mount: Deactivated successfully.
Feb  2 12:21:26 np0005605476 podman[88799]: 2026-02-02 17:21:26.862376308 +0000 UTC m=+0.203724322 container remove ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:26 np0005605476 systemd[1]: libpod-conmon-ed505b901080540fd802a26f3423bdde654e0a5f6f96c9502c17699f14290e84.scope: Deactivated successfully.
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.005267292 +0000 UTC m=+0.069478144 container create 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:26.96283881 +0000 UTC m=+0.027049652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:27 np0005605476 systemd[1]: Started libpod-conmon-0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd.scope.
Feb  2 12:21:27 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:27 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74cd2f207cb7a76263e6e08c6fc5f1ded796735a4fc54516ff75e313b023cd6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:27 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74cd2f207cb7a76263e6e08c6fc5f1ded796735a4fc54516ff75e313b023cd6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:27 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74cd2f207cb7a76263e6e08c6fc5f1ded796735a4fc54516ff75e313b023cd6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:27 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74cd2f207cb7a76263e6e08c6fc5f1ded796735a4fc54516ff75e313b023cd6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.108965459 +0000 UTC m=+0.173176291 container init 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.11490859 +0000 UTC m=+0.179119402 container start 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.124120657 +0000 UTC m=+0.188331499 container attach 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hccdnu(active, since 50s)
Feb  2 12:21:27 np0005605476 clever_herschel[88855]: [
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:    {
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "available": false,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "being_replaced": false,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "ceph_device_lvm": false,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "lsm_data": {},
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "lvs": [],
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "path": "/dev/sr0",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "rejected_reasons": [
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "Insufficient space (<5GB)",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "Has a FileSystem"
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        ],
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        "sys_api": {
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "actuators": null,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "device_nodes": [
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:                "sr0"
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            ],
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "devname": "sr0",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "human_readable_size": "482.00 KB",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "id_bus": "ata",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "model": "QEMU DVD-ROM",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "nr_requests": "2",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "parent": "/dev/sr0",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "partitions": {},
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "path": "/dev/sr0",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "removable": "1",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "rev": "2.5+",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "ro": "0",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "rotational": "1",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "sas_address": "",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "sas_device_handle": "",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "scheduler_mode": "mq-deadline",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "sectors": 0,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "sectorsize": "2048",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "size": 493568.0,
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "support_discard": "2048",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "type": "disk",
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:            "vendor": "QEMU"
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:        }
Feb  2 12:21:27 np0005605476 clever_herschel[88855]:    }
Feb  2 12:21:27 np0005605476 clever_herschel[88855]: ]
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 50.302 iops: 12877.356 elapsed_sec: 0.233
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [WRN] : OSD bench result of 12877.356499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 0 waiting for initial osdmap
Feb  2 12:21:27 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2[87788]: 2026-02-02T17:21:27.559+0000 7fc3cfa0b640 -1 osd.2 0 waiting for initial osdmap
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 12:21:27 np0005605476 systemd[1]: libpod-0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd.scope: Deactivated successfully.
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.569453765 +0000 UTC m=+0.633664577 container died 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:27 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-osd-2[87788]: 2026-02-02T17:21:27.582+0000 7fc3ca810640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 set_numa_affinity not setting numa affinity
Feb  2 12:21:27 np0005605476 ceph-osd[87792]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb  2 12:21:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-74cd2f207cb7a76263e6e08c6fc5f1ded796735a4fc54516ff75e313b023cd6e-merged.mount: Deactivated successfully.
Feb  2 12:21:27 np0005605476 podman[88839]: 2026-02-02 17:21:27.600715278 +0000 UTC m=+0.664926080 container remove 0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_herschel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:21:27 np0005605476 systemd[1]: libpod-conmon-0b2b93ebb3d1611da6f994f552cebddf2be79e5b025ee76d5fba6f8f49f3bbcd.scope: Deactivated successfully.
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43683k
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43683k
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2466851707; not ready for session (expect reconnect)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:27 np0005605476 ceph-mgr[75493]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.031155202 +0000 UTC m=+0.035354394 container create f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:28 np0005605476 systemd[1]: Started libpod-conmon-f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4.scope.
Feb  2 12:21:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.073287529 +0000 UTC m=+0.077486741 container init f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.077870418 +0000 UTC m=+0.082069620 container start f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:28 np0005605476 infallible_swanson[89598]: 167 167
Feb  2 12:21:28 np0005605476 systemd[1]: libpod-f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4.scope: Deactivated successfully.
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.081192454 +0000 UTC m=+0.085391666 container attach f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.082155931 +0000 UTC m=+0.086355123 container died f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:21:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f0adaa2f2466002d90ac18ae6e65d45083a8f53060c4936fb91614cadfcc5337-merged.mount: Deactivated successfully.
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.016921399 +0000 UTC m=+0.021120671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:28 np0005605476 podman[89582]: 2026-02-02 17:21:28.115447727 +0000 UTC m=+0.119646929 container remove f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_swanson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  2 12:21:28 np0005605476 systemd[1]: libpod-conmon-f373f826a3216745947ceb1f0dad52f0da8b19ebb14ddfc1ad1357abcaebcfa4.scope: Deactivated successfully.
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.229461319 +0000 UTC m=+0.037144073 container create a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 12:21:28 np0005605476 systemd[1]: Started libpod-conmon-a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799.scope.
Feb  2 12:21:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.213851993 +0000 UTC m=+0.021534777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.319185018 +0000 UTC m=+0.126867802 container init a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: OSD bench result of 12877.356499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.324547909 +0000 UTC m=+0.132230663 container start a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.328206152 +0000 UTC m=+0.135888926 container attach a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707] boot
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 12:21:28 np0005605476 ceph-osd[87792]: osd.2 15 state: booting -> active
Feb  2 12:21:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb  2 12:21:28 np0005605476 agitated_jones[89639]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:21:28 np0005605476 agitated_jones[89639]: --> All data devices are unavailable
Feb  2 12:21:28 np0005605476 systemd[1]: libpod-a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799.scope: Deactivated successfully.
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.702822314 +0000 UTC m=+0.510505088 container died a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:21:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-de2c0582cff0524c41134cbb7140f60ee9d5e51c30d255e1165df26d14825ba5-merged.mount: Deactivated successfully.
Feb  2 12:21:28 np0005605476 podman[89623]: 2026-02-02 17:21:28.735023893 +0000 UTC m=+0.542706647 container remove a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:28 np0005605476 systemd[1]: libpod-conmon-a31c40ee9bd0479f4a52b144613afbd8d1c43ef0b0d156f861907c621843f799.scope: Deactivated successfully.
Feb  2 12:21:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.12631574 +0000 UTC m=+0.045791311 container create 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:21:29 np0005605476 systemd[1]: Started libpod-conmon-87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960.scope.
Feb  2 12:21:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.101592699 +0000 UTC m=+0.021068320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.206472746 +0000 UTC m=+0.125948307 container init 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.214401991 +0000 UTC m=+0.133877542 container start 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:29 np0005605476 sharp_almeida[89750]: 167 167
Feb  2 12:21:29 np0005605476 systemd[1]: libpod-87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960.scope: Deactivated successfully.
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.218598982 +0000 UTC m=+0.138074523 container attach 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.219576419 +0000 UTC m=+0.139051980 container died 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-de496f6ad52b5cae16c3f5dad5f73728e04e53fa9a077e60e75b2de458d11503-merged.mount: Deactivated successfully.
Feb  2 12:21:29 np0005605476 podman[89734]: 2026-02-02 17:21:29.250280272 +0000 UTC m=+0.169755803 container remove 87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_almeida, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb  2 12:21:29 np0005605476 systemd[1]: libpod-conmon-87be30e7de52cee71fbed150917468aba1505a65cd4049d8c03d2d7ab38ce960.scope: Deactivated successfully.
Feb  2 12:21:29 np0005605476 ceph-mon[75197]: Adjusting osd_memory_target on compute-0 to 43683k
Feb  2 12:21:29 np0005605476 ceph-mon[75197]: Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Feb  2 12:21:29 np0005605476 ceph-mon[75197]: osd.2 [v2:192.168.122.100:6810/2466851707,v1:192.168.122.100:6811/2466851707] boot
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.36283048 +0000 UTC m=+0.030636503 container create 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:21:29 np0005605476 systemd[1]: Started libpod-conmon-6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a.scope.
Feb  2 12:21:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33566c328133f9d27b2ff91468164bc4438aa5165fc00ec5a2d3130d64d3b5e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33566c328133f9d27b2ff91468164bc4438aa5165fc00ec5a2d3130d64d3b5e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33566c328133f9d27b2ff91468164bc4438aa5165fc00ec5a2d3130d64d3b5e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33566c328133f9d27b2ff91468164bc4438aa5165fc00ec5a2d3130d64d3b5e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.439925853 +0000 UTC m=+0.107731906 container init 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.444866527 +0000 UTC m=+0.112672540 container start 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.349493732 +0000 UTC m=+0.017299795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.448152603 +0000 UTC m=+0.115958646 container attach 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]: {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    "0": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "devices": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "/dev/loop3"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            ],
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_name": "ceph_lv0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_size": "21470642176",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "name": "ceph_lv0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "tags": {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.crush_device_class": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.encrypted": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_id": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.vdo": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.with_tpm": "0"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            },
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "vg_name": "ceph_vg0"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        }
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    ],
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    "1": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "devices": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "/dev/loop4"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            ],
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_name": "ceph_lv1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_size": "21470642176",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "name": "ceph_lv1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "tags": {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.crush_device_class": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.encrypted": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_id": "1",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.vdo": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.with_tpm": "0"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            },
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "vg_name": "ceph_vg1"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        }
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    ],
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    "2": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "devices": [
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "/dev/loop5"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            ],
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_name": "ceph_lv2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_size": "21470642176",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "name": "ceph_lv2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "tags": {
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.crush_device_class": "",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.encrypted": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osd_id": "2",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.vdo": "0",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:                "ceph.with_tpm": "0"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            },
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "type": "block",
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:            "vg_name": "ceph_vg2"
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:        }
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]:    ]
Feb  2 12:21:29 np0005605476 quirky_blackwell[89791]: }
Feb  2 12:21:29 np0005605476 systemd[1]: libpod-6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a.scope: Deactivated successfully.
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.739520498 +0000 UTC m=+0.407326511 container died 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-33566c328133f9d27b2ff91468164bc4438aa5165fc00ec5a2d3130d64d3b5e5-merged.mount: Deactivated successfully.
Feb  2 12:21:29 np0005605476 podman[89774]: 2026-02-02 17:21:29.776183182 +0000 UTC m=+0.443989205 container remove 6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:29 np0005605476 systemd[1]: libpod-conmon-6651ca6aaef6fd042f9934358903b80c5fa27742861180ec0272b5853026af3a.scope: Deactivated successfully.
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.246204181 +0000 UTC m=+0.052639128 container create f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:30 np0005605476 systemd[1]: Started libpod-conmon-f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3.scope.
Feb  2 12:21:30 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.317505846 +0000 UTC m=+0.123940833 container init f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.225738062 +0000 UTC m=+0.032172979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.322264387 +0000 UTC m=+0.128699314 container start f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.325700695 +0000 UTC m=+0.132135672 container attach f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:21:30 np0005605476 epic_wiles[89889]: 167 167
Feb  2 12:21:30 np0005605476 systemd[1]: libpod-f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3.scope: Deactivated successfully.
Feb  2 12:21:30 np0005605476 conmon[89889]: conmon f3f5b6235f6ca2124e7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3.scope/container/memory.events
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.328182387 +0000 UTC m=+0.134617304 container died f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:21:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb  2 12:21:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb  2 12:21:30 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fbe8600403303aae0fcf0955a0de09c3e0d05e8fd883460bc40bba75cafbbf9e-merged.mount: Deactivated successfully.
Feb  2 12:21:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb  2 12:21:30 np0005605476 podman[89873]: 2026-02-02 17:21:30.35710558 +0000 UTC m=+0.163540497 container remove f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:30 np0005605476 systemd[1]: libpod-conmon-f3f5b6235f6ca2124e7e5f5c6ac0e0fb56e4b5123edb4d0a2cc8acd59d293ed3.scope: Deactivated successfully.
Feb  2 12:21:30 np0005605476 podman[89912]: 2026-02-02 17:21:30.474995489 +0000 UTC m=+0.044018831 container create 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:30 np0005605476 systemd[1]: Started libpod-conmon-7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f.scope.
Feb  2 12:21:30 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144dab67b13dd9bdd28a9b64ecf175b66443a06e05e0321686441f6ac0206e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144dab67b13dd9bdd28a9b64ecf175b66443a06e05e0321686441f6ac0206e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144dab67b13dd9bdd28a9b64ecf175b66443a06e05e0321686441f6ac0206e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144dab67b13dd9bdd28a9b64ecf175b66443a06e05e0321686441f6ac0206e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:30 np0005605476 podman[89912]: 2026-02-02 17:21:30.549253414 +0000 UTC m=+0.118276776 container init 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:30 np0005605476 podman[89912]: 2026-02-02 17:21:30.457575062 +0000 UTC m=+0.026598424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:30 np0005605476 podman[89912]: 2026-02-02 17:21:30.554219929 +0000 UTC m=+0.123243371 container start 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:30 np0005605476 podman[89912]: 2026-02-02 17:21:30.557550315 +0000 UTC m=+0.126573677 container attach 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:21:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Feb  2 12:21:31 np0005605476 lvm[90004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:31 np0005605476 lvm[90004]: VG ceph_vg0 finished
Feb  2 12:21:31 np0005605476 lvm[90007]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:31 np0005605476 lvm[90007]: VG ceph_vg1 finished
Feb  2 12:21:31 np0005605476 lvm[90009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:31 np0005605476 lvm[90009]: VG ceph_vg2 finished
Feb  2 12:21:31 np0005605476 lvm[90010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:31 np0005605476 lvm[90010]: VG ceph_vg0 finished
Feb  2 12:21:31 np0005605476 stoic_jennings[89928]: {}
Feb  2 12:21:31 np0005605476 systemd[1]: libpod-7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f.scope: Deactivated successfully.
Feb  2 12:21:31 np0005605476 podman[89912]: 2026-02-02 17:21:31.300011425 +0000 UTC m=+0.869034757 container died 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:31 np0005605476 systemd[1]: libpod-7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f.scope: Consumed 1.002s CPU time.
Feb  2 12:21:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b144dab67b13dd9bdd28a9b64ecf175b66443a06e05e0321686441f6ac0206e5-merged.mount: Deactivated successfully.
Feb  2 12:21:31 np0005605476 podman[89912]: 2026-02-02 17:21:31.338962679 +0000 UTC m=+0.907986021 container remove 7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jennings, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:21:31 np0005605476 systemd[1]: libpod-conmon-7da3a8958a32bcf16bb502883792421327271f8ecba4c0a97b86762abebf044f.scope: Deactivated successfully.
Feb  2 12:21:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:31 np0005605476 podman[90143]: 2026-02-02 17:21:31.897927932 +0000 UTC m=+0.051819354 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:31 np0005605476 podman[90143]: 2026-02-02 17:21:31.986482851 +0000 UTC m=+0.140374283 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.745882079 +0000 UTC m=+0.034844494 container create a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:32 np0005605476 systemd[1]: Started libpod-conmon-a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa.scope.
Feb  2 12:21:32 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.812408783 +0000 UTC m=+0.101371278 container init a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.817273766 +0000 UTC m=+0.106236181 container start a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.820886127 +0000 UTC m=+0.109848562 container attach a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:21:32 np0005605476 jovial_benz[90373]: 167 167
Feb  2 12:21:32 np0005605476 systemd[1]: libpod-a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa.scope: Deactivated successfully.
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.821778793 +0000 UTC m=+0.110741208 container died a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.73183285 +0000 UTC m=+0.020795285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:32 np0005605476 systemd[1]: var-lib-containers-storage-overlay-67b764c565da2faf2434f89e10792148ef142e6c7fbf8a5dec77b6d0b5146733-merged.mount: Deactivated successfully.
Feb  2 12:21:32 np0005605476 podman[90356]: 2026-02-02 17:21:32.850066355 +0000 UTC m=+0.139028770 container remove a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:21:32 np0005605476 systemd[1]: libpod-conmon-a287efd924d0cf833c9123ab7a9377d34fd314ef7711cb69ef00be6e50c0ccfa.scope: Deactivated successfully.
Feb  2 12:21:32 np0005605476 podman[90397]: 2026-02-02 17:21:32.976080662 +0000 UTC m=+0.037323787 container create cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:33 np0005605476 systemd[1]: Started libpod-conmon-cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb.scope.
Feb  2 12:21:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:33.042725057 +0000 UTC m=+0.103968212 container init cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:33.048286842 +0000 UTC m=+0.109529977 container start cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:33.051636559 +0000 UTC m=+0.112879714 container attach cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:32.960289902 +0000 UTC m=+0.021533037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:33 np0005605476 sleepy_gauss[90413]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:21:33 np0005605476 sleepy_gauss[90413]: --> All data devices are unavailable
Feb  2 12:21:33 np0005605476 systemd[1]: libpod-cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb.scope: Deactivated successfully.
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:33.46781422 +0000 UTC m=+0.529057345 container died cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 12:21:33 np0005605476 systemd[1]: var-lib-containers-storage-overlay-647800c04f9410e979a4a311f483d2b28c7aacdbf22f7152fda0f24673531bbb-merged.mount: Deactivated successfully.
Feb  2 12:21:33 np0005605476 podman[90397]: 2026-02-02 17:21:33.51124089 +0000 UTC m=+0.572484025 container remove cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:33 np0005605476 systemd[1]: libpod-conmon-cf5922339091a6ed2d3a528d7deeb065367009b8acc1d079fca58588be665ddb.scope: Deactivated successfully.
Feb  2 12:21:33 np0005605476 python3[90470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:33 np0005605476 podman[90520]: 2026-02-02 17:21:33.698205705 +0000 UTC m=+0.042702458 container create 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:21:33 np0005605476 systemd[1]: Started libpod-conmon-757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30.scope.
Feb  2 12:21:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff372d643618605a183562b3570d1671cf158293f076ed908aa0fb0ab7392ea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff372d643618605a183562b3570d1671cf158293f076ed908aa0fb0ab7392ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff372d643618605a183562b3570d1671cf158293f076ed908aa0fb0ab7392ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:33 np0005605476 podman[90520]: 2026-02-02 17:21:33.765457371 +0000 UTC m=+0.109954144 container init 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:33 np0005605476 podman[90520]: 2026-02-02 17:21:33.680250249 +0000 UTC m=+0.024747062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:33 np0005605476 podman[90520]: 2026-02-02 17:21:33.774989833 +0000 UTC m=+0.119486586 container start 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:33 np0005605476 podman[90520]: 2026-02-02 17:21:33.778105117 +0000 UTC m=+0.122601920 container attach 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:21:33 np0005605476 podman[90557]: 2026-02-02 17:21:33.922617139 +0000 UTC m=+0.046989042 container create e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:33 np0005605476 systemd[1]: Started libpod-conmon-e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f.scope.
Feb  2 12:21:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:33 np0005605476 podman[90557]: 2026-02-02 17:21:33.901202984 +0000 UTC m=+0.025574967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:34 np0005605476 podman[90557]: 2026-02-02 17:21:34.010820352 +0000 UTC m=+0.135192255 container init e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:34 np0005605476 podman[90557]: 2026-02-02 17:21:34.020298733 +0000 UTC m=+0.144670636 container start e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:34 np0005605476 stupefied_panini[90590]: 167 167
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90557]: 2026-02-02 17:21:34.024856921 +0000 UTC m=+0.149228784 container attach e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:21:34 np0005605476 podman[90557]: 2026-02-02 17:21:34.025840077 +0000 UTC m=+0.150211980 container died e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:21:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a9745d6a17dc8ce887f2355b996114ae4a5cd7b1caba58e35f65cd386277cd34-merged.mount: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90557]: 2026-02-02 17:21:34.063448118 +0000 UTC m=+0.187819981 container remove e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-conmon-e17cb6dec49d74eaa1c1dae4d6f2b385de117f04fb71676b3aa6ebc793de8a4f.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.209588748 +0000 UTC m=+0.050286468 container create 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:21:34 np0005605476 systemd[1]: Started libpod-conmon-135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528.scope.
Feb  2 12:21:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d4d41d9dde49c865c53f37a55433d2b025699ca606675dcd09cb25e73384eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d4d41d9dde49c865c53f37a55433d2b025699ca606675dcd09cb25e73384eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d4d41d9dde49c865c53f37a55433d2b025699ca606675dcd09cb25e73384eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d4d41d9dde49c865c53f37a55433d2b025699ca606675dcd09cb25e73384eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.185790613 +0000 UTC m=+0.026488373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 12:21:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323423589' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 12:21:34 np0005605476 happy_black[90538]: 
Feb  2 12:21:34 np0005605476 happy_black[90538]: {"fsid":"eb48d0ef-3496-563c-b73d-661fb962013e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":75,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1770052888,"num_in_osds":3,"osd_in_since":1770052869,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":1341775872,"bytes_avail":63070150656,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-02-02T17:20:17:007731+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T17:20:17.009633+0000","services":{}},"progress_events":{}}
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90520]: 2026-02-02 17:21:34.313809044 +0000 UTC m=+0.658305797 container died 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.314038468 +0000 UTC m=+0.154736218 container init 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.321370963 +0000 UTC m=+0.162068683 container start 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.382314891 +0000 UTC m=+0.223012651 container attach 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7ff372d643618605a183562b3570d1671cf158293f076ed908aa0fb0ab7392ea-merged.mount: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90520]: 2026-02-02 17:21:34.407892437 +0000 UTC m=+0.752389190 container remove 757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30 (image=quay.io/ceph/ceph:v20, name=happy_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-conmon-757789def4dc0546212ad03a0ea09c13d3627b4437d6e10358923d514aff0c30.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]: {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    "0": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "devices": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "/dev/loop3"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            ],
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_name": "ceph_lv0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_size": "21470642176",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "name": "ceph_lv0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "tags": {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.crush_device_class": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.encrypted": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_id": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.vdo": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.with_tpm": "0"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            },
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "vg_name": "ceph_vg0"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        }
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    ],
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    "1": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "devices": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "/dev/loop4"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            ],
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_name": "ceph_lv1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_size": "21470642176",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "name": "ceph_lv1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "tags": {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.crush_device_class": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.encrypted": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_id": "1",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.vdo": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.with_tpm": "0"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            },
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "vg_name": "ceph_vg1"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        }
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    ],
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    "2": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "devices": [
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "/dev/loop5"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            ],
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_name": "ceph_lv2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_size": "21470642176",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "name": "ceph_lv2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "tags": {
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.crush_device_class": "",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.encrypted": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osd_id": "2",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.vdo": "0",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:                "ceph.with_tpm": "0"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            },
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "type": "block",
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:            "vg_name": "ceph_vg2"
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:        }
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]:    ]
Feb  2 12:21:34 np0005605476 lucid_yonath[90631]: }
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.648359164 +0000 UTC m=+0.489056884 container died 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a1d4d41d9dde49c865c53f37a55433d2b025699ca606675dcd09cb25e73384eb-merged.mount: Deactivated successfully.
Feb  2 12:21:34 np0005605476 podman[90614]: 2026-02-02 17:21:34.690842058 +0000 UTC m=+0.531539758 container remove 135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:21:34 np0005605476 systemd[1]: libpod-conmon-135f4ed854e4690cd7ba4b8a97f81a3aac95ee9b9d348d7dc3675fb67ab4f528.scope: Deactivated successfully.
Feb  2 12:21:34 np0005605476 python3[90691]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:34 np0005605476 podman[90742]: 2026-02-02 17:21:34.891435916 +0000 UTC m=+0.035656289 container create b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:21:34 np0005605476 systemd[1]: Started libpod-conmon-b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997.scope.
Feb  2 12:21:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a3ebb5d17eb85da81e71a7b213d772159bd82a5924db4c7e10ec2be98ad65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a3ebb5d17eb85da81e71a7b213d772159bd82a5924db4c7e10ec2be98ad65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:34 np0005605476 podman[90742]: 2026-02-02 17:21:34.954544221 +0000 UTC m=+0.098764634 container init b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:34 np0005605476 podman[90742]: 2026-02-02 17:21:34.959665448 +0000 UTC m=+0.103885841 container start b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:21:34 np0005605476 podman[90742]: 2026-02-02 17:21:34.962922654 +0000 UTC m=+0.107143037 container attach b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:21:34 np0005605476 podman[90742]: 2026-02-02 17:21:34.877702612 +0000 UTC m=+0.021923005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.061840069 +0000 UTC m=+0.034271275 container create d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:35 np0005605476 systemd[1]: Started libpod-conmon-d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5.scope.
Feb  2 12:21:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.118916601 +0000 UTC m=+0.091347827 container init d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.12237705 +0000 UTC m=+0.094808256 container start d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:35 np0005605476 optimistic_galileo[90809]: 167 167
Feb  2 12:21:35 np0005605476 systemd[1]: libpod-d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5.scope: Deactivated successfully.
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.126186115 +0000 UTC m=+0.098617321 container attach d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.126512651 +0000 UTC m=+0.098943857 container died d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.04604746 +0000 UTC m=+0.018478686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c44611d5c3f916444e99f0812e37209a6356074f6110d85bc61973e59947b31c-merged.mount: Deactivated successfully.
Feb  2 12:21:35 np0005605476 podman[90774]: 2026-02-02 17:21:35.161892414 +0000 UTC m=+0.134323650 container remove d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:21:35 np0005605476 systemd[1]: libpod-conmon-d40726d6ecea2a48485a0bd5f640b607398271c9bd13bd8edfcd49825dc833f5.scope: Deactivated successfully.
Feb  2 12:21:35 np0005605476 podman[90834]: 2026-02-02 17:21:35.298370818 +0000 UTC m=+0.043987879 container create dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:21:35 np0005605476 systemd[1]: Started libpod-conmon-dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7.scope.
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755765852' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b384c846fcbf2d65fcb91adb78bf3455de827a64ba7058ca2df6bfac193b29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b384c846fcbf2d65fcb91adb78bf3455de827a64ba7058ca2df6bfac193b29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b384c846fcbf2d65fcb91adb78bf3455de827a64ba7058ca2df6bfac193b29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b384c846fcbf2d65fcb91adb78bf3455de827a64ba7058ca2df6bfac193b29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 podman[90834]: 2026-02-02 17:21:35.279601618 +0000 UTC m=+0.025218729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:35 np0005605476 podman[90834]: 2026-02-02 17:21:35.383913876 +0000 UTC m=+0.129530957 container init dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:35 np0005605476 podman[90834]: 2026-02-02 17:21:35.38889289 +0000 UTC m=+0.134509991 container start dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:21:35 np0005605476 podman[90834]: 2026-02-02 17:21:35.39239693 +0000 UTC m=+0.138014021 container attach dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755765852' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb  2 12:21:35 np0005605476 funny_engelbart[90757]: pool 'vms' created
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb  2 12:21:35 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1755765852' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:35 np0005605476 systemd[1]: libpod-b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997.scope: Deactivated successfully.
Feb  2 12:21:35 np0005605476 podman[90742]: 2026-02-02 17:21:35.449070246 +0000 UTC m=+0.593290629 container died b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-206a3ebb5d17eb85da81e71a7b213d772159bd82a5924db4c7e10ec2be98ad65-merged.mount: Deactivated successfully.
Feb  2 12:21:35 np0005605476 podman[90742]: 2026-02-02 17:21:35.48045605 +0000 UTC m=+0.624676423 container remove b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997 (image=quay.io/ceph/ceph:v20, name=funny_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:35 np0005605476 systemd[1]: libpod-conmon-b84ae1af6297aace94f4ea0b47d57c3062bc5b7bf8ed9feed34797f490633997.scope: Deactivated successfully.
Feb  2 12:21:35 np0005605476 python3[90905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:35 np0005605476 podman[90917]: 2026-02-02 17:21:35.776885131 +0000 UTC m=+0.040721695 container create c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:35 np0005605476 systemd[1]: Started libpod-conmon-c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39.scope.
Feb  2 12:21:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f23df567737d5fa7515e50d66c1f34af9ffb33baced899d108b4ca70089b3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f23df567737d5fa7515e50d66c1f34af9ffb33baced899d108b4ca70089b3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:35 np0005605476 podman[90917]: 2026-02-02 17:21:35.84843329 +0000 UTC m=+0.112269874 container init c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:35 np0005605476 podman[90917]: 2026-02-02 17:21:35.759407293 +0000 UTC m=+0.023243887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:35 np0005605476 podman[90917]: 2026-02-02 17:21:35.854728107 +0000 UTC m=+0.118564681 container start c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:21:35 np0005605476 podman[90917]: 2026-02-02 17:21:35.858583733 +0000 UTC m=+0.122420297 container attach c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:21:35 np0005605476 lvm[91004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:35 np0005605476 lvm[91004]: VG ceph_vg0 finished
Feb  2 12:21:35 np0005605476 lvm[91007]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:35 np0005605476 lvm[91007]: VG ceph_vg1 finished
Feb  2 12:21:36 np0005605476 lvm[91009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:36 np0005605476 lvm[91009]: VG ceph_vg2 finished
Feb  2 12:21:36 np0005605476 lvm[91010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:36 np0005605476 lvm[91010]: VG ceph_vg0 finished
Feb  2 12:21:36 np0005605476 lvm[91011]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:36 np0005605476 lvm[91011]: VG ceph_vg1 finished
Feb  2 12:21:36 np0005605476 trusting_leakey[90850]: {}
Feb  2 12:21:36 np0005605476 systemd[1]: libpod-dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7.scope: Deactivated successfully.
Feb  2 12:21:36 np0005605476 systemd[1]: libpod-dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7.scope: Consumed 1.072s CPU time.
Feb  2 12:21:36 np0005605476 podman[90834]: 2026-02-02 17:21:36.14187637 +0000 UTC m=+0.887493431 container died dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:21:36 np0005605476 systemd[1]: var-lib-containers-storage-overlay-54b384c846fcbf2d65fcb91adb78bf3455de827a64ba7058ca2df6bfac193b29-merged.mount: Deactivated successfully.
Feb  2 12:21:36 np0005605476 podman[90834]: 2026-02-02 17:21:36.174384484 +0000 UTC m=+0.920001545 container remove dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:21:36 np0005605476 systemd[1]: libpod-conmon-dd13df835bb91f4f0f1f2133ad93093b91d7ed8b30f5b180e0d90eee4bd7fad7.scope: Deactivated successfully.
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3139828194' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:36 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1755765852' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3139828194' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:36 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v38: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:21:36
Feb  2 12:21:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:21:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3139828194' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb  2 12:21:37 np0005605476 kind_jang[90955]: pool 'volumes' created
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb  2 12:21:37 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:37 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:37 np0005605476 systemd[1]: libpod-c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39.scope: Deactivated successfully.
Feb  2 12:21:37 np0005605476 podman[90917]: 2026-02-02 17:21:37.247478566 +0000 UTC m=+1.511315130 container died c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:21:37 np0005605476 systemd[1]: var-lib-containers-storage-overlay-42f23df567737d5fa7515e50d66c1f34af9ffb33baced899d108b4ca70089b3f-merged.mount: Deactivated successfully.
Feb  2 12:21:37 np0005605476 podman[90917]: 2026-02-02 17:21:37.285201469 +0000 UTC m=+1.549038033 container remove c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39 (image=quay.io/ceph/ceph:v20, name=kind_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:21:37 np0005605476 systemd[1]: libpod-conmon-c473f2522338f1efcf93af0f2f2ed97efeec8bdf0fee418aeba90fcbd9835e39.scope: Deactivated successfully.
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3139828194' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:21:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:21:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:21:37 np0005605476 python3[91092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:37 np0005605476 podman[91093]: 2026-02-02 17:21:37.578391015 +0000 UTC m=+0.041031840 container create e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:37 np0005605476 systemd[1]: Started libpod-conmon-e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0.scope.
Feb  2 12:21:37 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc68eff1745b3b32b494baa16ca904d678346205420cbb3d6ce6cf7b72c30872/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc68eff1745b3b32b494baa16ca904d678346205420cbb3d6ce6cf7b72c30872/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:37 np0005605476 podman[91093]: 2026-02-02 17:21:37.647065995 +0000 UTC m=+0.109706830 container init e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:21:37 np0005605476 podman[91093]: 2026-02-02 17:21:37.653525925 +0000 UTC m=+0.116166790 container start e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:21:37 np0005605476 podman[91093]: 2026-02-02 17:21:37.563534672 +0000 UTC m=+0.026175587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:37 np0005605476 podman[91093]: 2026-02-02 17:21:37.657297929 +0000 UTC m=+0.119938774 container attach e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637986961' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637986961' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb  2 12:21:38 np0005605476 busy_herschel[91108]: pool 'backups' created
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb  2 12:21:38 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev c23f51e5-ca2e-44a0-9585-d9db0720145b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:21:38 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:38 np0005605476 systemd[1]: libpod-e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0.scope: Deactivated successfully.
Feb  2 12:21:38 np0005605476 podman[91093]: 2026-02-02 17:21:38.251740548 +0000 UTC m=+0.714381403 container died e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bc68eff1745b3b32b494baa16ca904d678346205420cbb3d6ce6cf7b72c30872-merged.mount: Deactivated successfully.
Feb  2 12:21:38 np0005605476 podman[91093]: 2026-02-02 17:21:38.289212236 +0000 UTC m=+0.751853101 container remove e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0 (image=quay.io/ceph/ceph:v20, name=busy_herschel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb  2 12:21:38 np0005605476 systemd[1]: libpod-conmon-e4ef9bc41f34e150239575e7a41390734274a060049b33367e0e697157585ff0.scope: Deactivated successfully.
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1637986961' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1637986961' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:21:38 np0005605476 python3[91172]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:38 np0005605476 podman[91173]: 2026-02-02 17:21:38.575337021 +0000 UTC m=+0.034954146 container create 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:21:38 np0005605476 systemd[1]: Started libpod-conmon-78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18.scope.
Feb  2 12:21:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2774040ddb76790d8ce12a1e718e3670acb362d62c341a057790a046286b4b81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2774040ddb76790d8ce12a1e718e3670acb362d62c341a057790a046286b4b81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:38 np0005605476 podman[91173]: 2026-02-02 17:21:38.563280076 +0000 UTC m=+0.022897221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v41: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:38 np0005605476 podman[91173]: 2026-02-02 17:21:38.677470501 +0000 UTC m=+0.137087656 container init 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:21:38 np0005605476 podman[91173]: 2026-02-02 17:21:38.682992636 +0000 UTC m=+0.142609761 container start 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:21:38 np0005605476 podman[91173]: 2026-02-02 17:21:38.687841908 +0000 UTC m=+0.147459033 container attach 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:21:38 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1922849901' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1922849901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb  2 12:21:39 np0005605476 quirky_curie[91188]: pool 'images' created
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb  2 12:21:39 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 4058ed09-1837-4580-904b-8b845b756beb (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 12:21:39 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev c23f51e5-ca2e-44a0-9585-d9db0720145b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 12:21:39 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event c23f51e5-ca2e-44a0-9585-d9db0720145b (PG autoscaler increasing pool 2 PGs from 1 to 32) in 1 seconds
Feb  2 12:21:39 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 4058ed09-1837-4580-904b-8b845b756beb (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 12:21:39 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 4058ed09-1837-4580-904b-8b845b756beb (PG autoscaler increasing pool 3 PGs from 1 to 32) in 0 seconds
Feb  2 12:21:39 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:39 np0005605476 systemd[1]: libpod-78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18.scope: Deactivated successfully.
Feb  2 12:21:39 np0005605476 podman[91173]: 2026-02-02 17:21:39.262296969 +0000 UTC m=+0.721914094 container died 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:21:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2774040ddb76790d8ce12a1e718e3670acb362d62c341a057790a046286b4b81-merged.mount: Deactivated successfully.
Feb  2 12:21:39 np0005605476 podman[91173]: 2026-02-02 17:21:39.294125517 +0000 UTC m=+0.753742632 container remove 78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18 (image=quay.io/ceph/ceph:v20, name=quirky_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:39 np0005605476 systemd[1]: libpod-conmon-78b68fd76e2e16632f5edfe8f978679351cbed9736acc959c569f35615fd2e18.scope: Deactivated successfully.
Feb  2 12:21:39 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1922849901' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:21:39 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1922849901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:39 np0005605476 python3[91252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:39 np0005605476 podman[91253]: 2026-02-02 17:21:39.633649464 +0000 UTC m=+0.044938928 container create 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:39 np0005605476 systemd[1]: Started libpod-conmon-07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d.scope.
Feb  2 12:21:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa641feb8ae21e5d3933c93818d290114f8814c0462bf2fcdd009337a7853dab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa641feb8ae21e5d3933c93818d290114f8814c0462bf2fcdd009337a7853dab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:39 np0005605476 podman[91253]: 2026-02-02 17:21:39.683707576 +0000 UTC m=+0.094997050 container init 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:21:39 np0005605476 podman[91253]: 2026-02-02 17:21:39.689028937 +0000 UTC m=+0.100318421 container start 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:39 np0005605476 podman[91253]: 2026-02-02 17:21:39.692244347 +0000 UTC m=+0.103533831 container attach 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:21:39 np0005605476 podman[91253]: 2026-02-02 17:21:39.619392432 +0000 UTC m=+0.030681926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1461459890' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1461459890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb  2 12:21:40 np0005605476 sweet_mayer[91268]: pool 'cephfs.cephfs.meta' created
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb  2 12:21:40 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:40 np0005605476 systemd[1]: libpod-07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d.scope: Deactivated successfully.
Feb  2 12:21:40 np0005605476 podman[91253]: 2026-02-02 17:21:40.25988075 +0000 UTC m=+0.671170214 container died 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-aa641feb8ae21e5d3933c93818d290114f8814c0462bf2fcdd009337a7853dab-merged.mount: Deactivated successfully.
Feb  2 12:21:40 np0005605476 podman[91253]: 2026-02-02 17:21:40.294817645 +0000 UTC m=+0.706107119 container remove 07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d (image=quay.io/ceph/ceph:v20, name=sweet_mayer, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:40 np0005605476 systemd[1]: libpod-conmon-07d263acb8c175bcbeef41417c8a97db62a70b692629eb5ef9f1091a90aefd4d.scope: Deactivated successfully.
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1461459890' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1461459890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:40 np0005605476 python3[91331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:40 np0005605476 podman[91332]: 2026-02-02 17:21:40.637529943 +0000 UTC m=+0.042547921 container create c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v44: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:21:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:21:40 np0005605476 systemd[1]: Started libpod-conmon-c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a.scope.
Feb  2 12:21:40 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcefa7b23863db0cad1b9e1b4ef681d6391ea03ce91a6102a4608c4aa553cf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcefa7b23863db0cad1b9e1b4ef681d6391ea03ce91a6102a4608c4aa553cf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:40 np0005605476 podman[91332]: 2026-02-02 17:21:40.705318945 +0000 UTC m=+0.110336693 container init c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:21:40 np0005605476 podman[91332]: 2026-02-02 17:21:40.709704889 +0000 UTC m=+0.114722637 container start c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:40 np0005605476 podman[91332]: 2026-02-02 17:21:40.712796076 +0000 UTC m=+0.117813844 container attach c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:40 np0005605476 podman[91332]: 2026-02-02 17:21:40.6200618 +0000 UTC m=+0.025079588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:40 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/338071467' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/338071467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb  2 12:21:41 np0005605476 frosty_haslett[91348]: pool 'cephfs.cephfs.data' created
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb  2 12:21:41 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:41 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=22 pruub=12.986454010s) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active pruub 34.655525208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:41 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=22 pruub=12.986454010s) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown pruub 34.655525208s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:41 np0005605476 systemd[1]: libpod-c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a.scope: Deactivated successfully.
Feb  2 12:21:41 np0005605476 podman[91332]: 2026-02-02 17:21:41.273245616 +0000 UTC m=+0.678263364 container died c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:41 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8fcefa7b23863db0cad1b9e1b4ef681d6391ea03ce91a6102a4608c4aa553cf9-merged.mount: Deactivated successfully.
Feb  2 12:21:41 np0005605476 podman[91332]: 2026-02-02 17:21:41.310529598 +0000 UTC m=+0.715547366 container remove c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a (image=quay.io/ceph/ceph:v20, name=frosty_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:41 np0005605476 systemd[1]: libpod-conmon-c99c7720867789c2fab04edb0639d3e911ef83429e8c2c646cfd3a010a36d41a.scope: Deactivated successfully.
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/338071467' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:21:41 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/338071467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 12:21:41 np0005605476 python3[91412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:41 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22 pruub=11.627800941s) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active pruub 29.968048096s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:41 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22 pruub=11.627800941s) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown pruub 29.968048096s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:41 np0005605476 podman[91413]: 2026-02-02 17:21:41.631756509 +0000 UTC m=+0.049597520 container create dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:41 np0005605476 systemd[1]: Started libpod-conmon-dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2.scope.
Feb  2 12:21:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4429613a833644429ce29b45692bedf5366ca6d5ee257b9caa33b64f793229c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4429613a833644429ce29b45692bedf5366ca6d5ee257b9caa33b64f793229c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:41 np0005605476 podman[91413]: 2026-02-02 17:21:41.695210739 +0000 UTC m=+0.113051770 container init dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:41 np0005605476 podman[91413]: 2026-02-02 17:21:41.699382217 +0000 UTC m=+0.117223208 container start dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:21:41 np0005605476 podman[91413]: 2026-02-02 17:21:41.606841416 +0000 UTC m=+0.024682477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:41 np0005605476 podman[91413]: 2026-02-02 17:21:41.702452663 +0000 UTC m=+0.120293654 container attach dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1737075562' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1737075562' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb  2 12:21:42 np0005605476 elegant_poitras[91429]: enabled application 'rbd' on pool 'vms'
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1f( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1d( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1e( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1c( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.a( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.8( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.6( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.9( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.5( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.4( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.3( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.2( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.7( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.c( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.d( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.b( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.e( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.10( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.f( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.11( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.12( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.13( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.14( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.15( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.16( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.17( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.19( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1a( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.18( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1b( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=18/19 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1e( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=22/23 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.e( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.10( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.14( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 23 pg[2.12( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=22/23 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=18/18 les/c/f=19/19/0 sis=22) [1] r=0 lpr=22 pi=[18,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:42 np0005605476 systemd[1]: libpod-dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2.scope: Deactivated successfully.
Feb  2 12:21:42 np0005605476 podman[91413]: 2026-02-02 17:21:42.283871324 +0000 UTC m=+0.701712335 container died dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:42 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c4429613a833644429ce29b45692bedf5366ca6d5ee257b9caa33b64f793229c-merged.mount: Deactivated successfully.
Feb  2 12:21:42 np0005605476 podman[91413]: 2026-02-02 17:21:42.312898633 +0000 UTC m=+0.730739614 container remove dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2 (image=quay.io/ceph/ceph:v20, name=elegant_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:21:42 np0005605476 systemd[1]: libpod-conmon-dec44d188288d3fae79f9ae9557c268905231d072e4ccd881bd67c9e579a2ef2.scope: Deactivated successfully.
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1737075562' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1737075562' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 12:21:42 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 5 completed events
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:21:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb  2 12:21:42 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb  2 12:21:42 np0005605476 python3[91492]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:42 np0005605476 podman[91493]: 2026-02-02 17:21:42.623431562 +0000 UTC m=+0.038890888 container create b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:42 np0005605476 systemd[1]: Started libpod-conmon-b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca.scope.
Feb  2 12:21:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v47: 69 pgs: 4 active+clean, 65 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8018e0d374b1973417b3d0755eeaf7090258e14adb8d1d13f2b10b1b08f9480d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8018e0d374b1973417b3d0755eeaf7090258e14adb8d1d13f2b10b1b08f9480d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:42 np0005605476 podman[91493]: 2026-02-02 17:21:42.676821188 +0000 UTC m=+0.092280524 container init b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:21:42 np0005605476 podman[91493]: 2026-02-02 17:21:42.681463659 +0000 UTC m=+0.096922985 container start b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:42 np0005605476 podman[91493]: 2026-02-02 17:21:42.684251538 +0000 UTC m=+0.099710864 container attach b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:42 np0005605476 podman[91493]: 2026-02-02 17:21:42.606364721 +0000 UTC m=+0.021824087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/708320094' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/708320094' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb  2 12:21:43 np0005605476 exciting_chebyshev[91508]: enabled application 'rbd' on pool 'volumes'
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb  2 12:21:43 np0005605476 systemd[1]: libpod-b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca.scope: Deactivated successfully.
Feb  2 12:21:43 np0005605476 podman[91493]: 2026-02-02 17:21:43.2809331 +0000 UTC m=+0.696392426 container died b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:21:43 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8018e0d374b1973417b3d0755eeaf7090258e14adb8d1d13f2b10b1b08f9480d-merged.mount: Deactivated successfully.
Feb  2 12:21:43 np0005605476 podman[91493]: 2026-02-02 17:21:43.31923814 +0000 UTC m=+0.734697476 container remove b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca (image=quay.io/ceph/ceph:v20, name=exciting_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:21:43 np0005605476 systemd[1]: libpod-conmon-b57ccc0e4b1226ab5fca3f4b4e95d93edfbca27b3edec8abe5474a847d94fcca.scope: Deactivated successfully.
Feb  2 12:21:43 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb  2 12:21:43 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/708320094' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/708320094' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 12:21:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb  2 12:21:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb  2 12:21:43 np0005605476 python3[91568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:43 np0005605476 podman[91569]: 2026-02-02 17:21:43.62942513 +0000 UTC m=+0.032429285 container create 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:21:43 np0005605476 systemd[1]: Started libpod-conmon-624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570.scope.
Feb  2 12:21:43 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:43 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50605df56b4a901cdf443ef7afc9c69b7d0a4a85ad97c1a8790aa760028c24d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:43 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50605df56b4a901cdf443ef7afc9c69b7d0a4a85ad97c1a8790aa760028c24d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:43 np0005605476 podman[91569]: 2026-02-02 17:21:43.706316529 +0000 UTC m=+0.109320674 container init 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:21:43 np0005605476 podman[91569]: 2026-02-02 17:21:43.613976165 +0000 UTC m=+0.016980320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:43 np0005605476 podman[91569]: 2026-02-02 17:21:43.712775042 +0000 UTC m=+0.115779187 container start 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:43 np0005605476 podman[91569]: 2026-02-02 17:21:43.71733314 +0000 UTC m=+0.120337335 container attach 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:21:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3624089574' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  2 12:21:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb  2 12:21:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3624089574' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3624089574' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb  2 12:21:44 np0005605476 intelligent_burnell[91584]: enabled application 'rbd' on pool 'backups'
Feb  2 12:21:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb  2 12:21:44 np0005605476 systemd[1]: libpod-624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570.scope: Deactivated successfully.
Feb  2 12:21:44 np0005605476 podman[91569]: 2026-02-02 17:21:44.561307888 +0000 UTC m=+0.964312023 container died 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:21:44 np0005605476 systemd[1]: var-lib-containers-storage-overlay-50605df56b4a901cdf443ef7afc9c69b7d0a4a85ad97c1a8790aa760028c24d1-merged.mount: Deactivated successfully.
Feb  2 12:21:44 np0005605476 podman[91569]: 2026-02-02 17:21:44.598162518 +0000 UTC m=+1.001166663 container remove 624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570 (image=quay.io/ceph/ceph:v20, name=intelligent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:21:44 np0005605476 systemd[1]: libpod-conmon-624e41b8bee2402de3910fb5a4383299b37f065d0d291937e46381d248d4f570.scope: Deactivated successfully.
Feb  2 12:21:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v50: 69 pgs: 68 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:44 np0005605476 python3[91647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:44 np0005605476 podman[91648]: 2026-02-02 17:21:44.863157383 +0000 UTC m=+0.035863433 container create 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:21:44 np0005605476 systemd[1]: Started libpod-conmon-9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a.scope.
Feb  2 12:21:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6343ef8b96d2b2f3ea52869a407c43926862dcd870ec7839dc06e1a1c35e19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6343ef8b96d2b2f3ea52869a407c43926862dcd870ec7839dc06e1a1c35e19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:44 np0005605476 podman[91648]: 2026-02-02 17:21:44.917692051 +0000 UTC m=+0.090398141 container init 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:44 np0005605476 podman[91648]: 2026-02-02 17:21:44.922692362 +0000 UTC m=+0.095398412 container start 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:44 np0005605476 podman[91648]: 2026-02-02 17:21:44.926239342 +0000 UTC m=+0.098945442 container attach 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:44 np0005605476 podman[91648]: 2026-02-02 17:21:44.848009046 +0000 UTC m=+0.020715106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2203480788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  2 12:21:45 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb  2 12:21:45 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb  2 12:21:45 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb  2 12:21:45 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2203480788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb  2 12:21:45 np0005605476 pedantic_euclid[91663]: enabled application 'rbd' on pool 'images'
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3624089574' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2203480788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  2 12:21:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb  2 12:21:45 np0005605476 systemd[1]: libpod-9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a.scope: Deactivated successfully.
Feb  2 12:21:45 np0005605476 podman[91648]: 2026-02-02 17:21:45.565870716 +0000 UTC m=+0.738576766 container died 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6b6343ef8b96d2b2f3ea52869a407c43926862dcd870ec7839dc06e1a1c35e19-merged.mount: Deactivated successfully.
Feb  2 12:21:45 np0005605476 podman[91648]: 2026-02-02 17:21:45.593832965 +0000 UTC m=+0.766539005 container remove 9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a (image=quay.io/ceph/ceph:v20, name=pedantic_euclid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:45 np0005605476 systemd[1]: libpod-conmon-9f7fb4b24463a05849bc8d29661d87853c8ce07933a1e663bbdece88515fee8a.scope: Deactivated successfully.
Feb  2 12:21:45 np0005605476 python3[91725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:45 np0005605476 podman[91726]: 2026-02-02 17:21:45.896244635 +0000 UTC m=+0.052938074 container create 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:45 np0005605476 systemd[1]: Started libpod-conmon-669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617.scope.
Feb  2 12:21:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb501ea065fbf78a74fb0a8d3e2a8a9059a23e8a32feeebed151713c7e1c20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb501ea065fbf78a74fb0a8d3e2a8a9059a23e8a32feeebed151713c7e1c20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:45 np0005605476 podman[91726]: 2026-02-02 17:21:45.875633634 +0000 UTC m=+0.032327093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:45 np0005605476 podman[91726]: 2026-02-02 17:21:45.976696845 +0000 UTC m=+0.133390294 container init 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:45 np0005605476 podman[91726]: 2026-02-02 17:21:45.980799211 +0000 UTC m=+0.137492650 container start 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:21:45 np0005605476 podman[91726]: 2026-02-02 17:21:45.984407632 +0000 UTC m=+0.141101131 container attach 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3643565376' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2203480788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3643565376' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3643565376' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb  2 12:21:46 np0005605476 nervous_shannon[91741]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb  2 12:21:46 np0005605476 systemd[1]: libpod-669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617.scope: Deactivated successfully.
Feb  2 12:21:46 np0005605476 podman[91726]: 2026-02-02 17:21:46.577983926 +0000 UTC m=+0.734677365 container died 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2cbb501ea065fbf78a74fb0a8d3e2a8a9059a23e8a32feeebed151713c7e1c20-merged.mount: Deactivated successfully.
Feb  2 12:21:46 np0005605476 podman[91726]: 2026-02-02 17:21:46.612453818 +0000 UTC m=+0.769147267 container remove 669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617 (image=quay.io/ceph/ceph:v20, name=nervous_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:21:46 np0005605476 systemd[1]: libpod-conmon-669bd565de9a1e9a1f11d9efa10693debafcaa2986f407ec4d2406be752af617.scope: Deactivated successfully.
Feb  2 12:21:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v53: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:21:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:21:46 np0005605476 python3[91802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:46 np0005605476 podman[91803]: 2026-02-02 17:21:46.884370909 +0000 UTC m=+0.030242935 container create 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:21:46 np0005605476 systemd[1]: Started libpod-conmon-8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1.scope.
Feb  2 12:21:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979af8af0fc5da2e8da0d1816aecc15b5651b3d2ecce4ca2b99adc869b399abd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979af8af0fc5da2e8da0d1816aecc15b5651b3d2ecce4ca2b99adc869b399abd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:46 np0005605476 podman[91803]: 2026-02-02 17:21:46.934271936 +0000 UTC m=+0.080143972 container init 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:21:46 np0005605476 podman[91803]: 2026-02-02 17:21:46.939124903 +0000 UTC m=+0.084996929 container start 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:21:46 np0005605476 podman[91803]: 2026-02-02 17:21:46.941751877 +0000 UTC m=+0.087623923 container attach 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:21:46 np0005605476 podman[91803]: 2026-02-02 17:21:46.872486473 +0000 UTC m=+0.018358509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803032827' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803032827' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb  2 12:21:47 np0005605476 gifted_williamson[91818]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700962067s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682895660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700903893s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682895660s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.697318077s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.679351807s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700112343s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682231903s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.697247505s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.679351807s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700079918s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682231903s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700699806s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682945251s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700677872s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682945251s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700604439s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682918549s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700654030s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682975769s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700584412s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682918549s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700629234s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682975769s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700654030s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683063507s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700632095s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683063507s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700494766s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682945251s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700483322s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682945251s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700552940s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683040619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700531006s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683040619s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700540543s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683105469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700495720s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683105469s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700454712s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683139801s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700470924s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683166504s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700441360s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683139801s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700449944s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683166504s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700394630s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683155060s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700369835s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683155060s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700897217s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683769226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700858116s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683769226s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700776100s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683753967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700753212s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683753967s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700756073s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683799744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700736046s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683799744s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700859070s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683971405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700835228s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683971405s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700822830s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.684001923s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.700803757s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.684001923s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.698995590s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682235718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.698986053s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.682289124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.698963165s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682289124s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.698936462s) [2] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.682235718s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.699652672s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 38.683189392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=22/23 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.699595451s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 38.683189392s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3643565376' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:21:47 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/803032827' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.18( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.7( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.8( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[3.1e( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.683822632s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 34.995258331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.683918953s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 34.995395660s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.683862686s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 34.995361328s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.683890343s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 34.995395660s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.689511299s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.001033783s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.683826447s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 34.995361328s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.689477921s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.001033783s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688481331s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000148773s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688467026s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000148773s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688091278s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000152588s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688315392s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000392914s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688075066s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000152588s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688292503s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000392914s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.687972069s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000160217s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688095093s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000289917s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=0/0 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688082695s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000289917s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.687953949s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000160217s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688097954s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000335693s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688081741s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000335693s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688023567s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000358582s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688012123s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000377655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688008308s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000358582s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.687994957s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000377655s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688037872s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000457764s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688024521s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000457764s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688130379s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000656128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688323975s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000869751s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688110352s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000656128s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688309669s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000869751s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688168526s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000770569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688151360s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000770569s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688220978s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000938416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688205719s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000938416s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688238144s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000995636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688220024s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000995636s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688162804s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000976562s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688302040s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.001136780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688098907s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.000953674s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688130379s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000976562s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688082695s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.000953674s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688257217s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.001136780s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688119888s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.001174927s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.688102722s) [1] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.001174927s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.687834740s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 active pruub 35.001014709s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.687814713s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 35.001014709s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 systemd[1]: libpod-8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1.scope: Deactivated successfully.
Feb  2 12:21:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 28 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=10.681861877s) [0] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 unknown NOTIFY pruub 34.995258331s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 conmon[91818]: conmon 8781f63711da209d96be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1.scope/container/memory.events
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.1c( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.5( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.1d( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 podman[91803]: 2026-02-02 17:21:47.585557838 +0000 UTC m=+0.731429904 container died 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.b( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.3( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.8( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.7( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.2( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.17( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.f( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.11( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.16( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.18( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:21:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-979af8af0fc5da2e8da0d1816aecc15b5651b3d2ecce4ca2b99adc869b399abd-merged.mount: Deactivated successfully.
Feb  2 12:21:47 np0005605476 podman[91803]: 2026-02-02 17:21:47.624404934 +0000 UTC m=+0.770276960 container remove 8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1 (image=quay.io/ceph/ceph:v20, name=gifted_williamson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:21:47 np0005605476 systemd[1]: libpod-conmon-8781f63711da209d96be53e02e4a8f9a7eb12cf48a8493ad55b354fa006947e1.scope: Deactivated successfully.
Feb  2 12:21:48 np0005605476 python3[91930]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.1e( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.8( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.7( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.18( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/803032827' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 12:21:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [2] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.16( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.b( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.8( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.1c( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.1d( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.11( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.18( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.f( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[2.2( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=22/18 lis/c=22/22 les/c/f=23/23/0 sis=28) [0] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=28) [1] r=0 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:21:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v56: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:48 np0005605476 python3[92001]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052908.2511356-36688-173380129937712/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:21:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:49 np0005605476 python3[92103]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:21:49 np0005605476 python3[92178]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052909.0331366-36702-198821791324240/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d42f2cb549f850d16aad74aac703d01d14e247e7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:21:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb  2 12:21:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb  2 12:21:49 np0005605476 python3[92228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.025238568 +0000 UTC m=+0.038293671 container create 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:21:50 np0005605476 systemd[1]: Started libpod-conmon-4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d.scope.
Feb  2 12:21:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6e97d95273adfca808ff6ab412f650b242fda14d624a9440c2627d6573dc7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6e97d95273adfca808ff6ab412f650b242fda14d624a9440c2627d6573dc7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6e97d95273adfca808ff6ab412f650b242fda14d624a9440c2627d6573dc7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.08342386 +0000 UTC m=+0.096478943 container init 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.087041482 +0000 UTC m=+0.100096565 container start 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.089588244 +0000 UTC m=+0.102643327 container attach 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.008412634 +0000 UTC m=+0.021467747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 12:21:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/384791043' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:21:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/384791043' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 12:21:50 np0005605476 frosty_wu[92245]: 
Feb  2 12:21:50 np0005605476 frosty_wu[92245]: [global]
Feb  2 12:21:50 np0005605476 frosty_wu[92245]: #011fsid = eb48d0ef-3496-563c-b73d-661fb962013e
Feb  2 12:21:50 np0005605476 frosty_wu[92245]: #011mon_host = 192.168.122.100
Feb  2 12:21:50 np0005605476 frosty_wu[92245]: #011rgw_keystone_api_version = 3
Feb  2 12:21:50 np0005605476 systemd[1]: libpod-4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d.scope: Deactivated successfully.
Feb  2 12:21:50 np0005605476 conmon[92245]: conmon 4775933f8d20626f87d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d.scope/container/memory.events
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.534498774 +0000 UTC m=+0.547553867 container died 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4a6e97d95273adfca808ff6ab412f650b242fda14d624a9440c2627d6573dc7b-merged.mount: Deactivated successfully.
Feb  2 12:21:50 np0005605476 podman[92229]: 2026-02-02 17:21:50.576492049 +0000 UTC m=+0.589547132 container remove 4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d (image=quay.io/ceph/ceph:v20, name=frosty_wu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:50 np0005605476 systemd[1]: libpod-conmon-4775933f8d20626f87d5f639f8e299d73dd2b2c86de9a0be93f87fa31bda717d.scope: Deactivated successfully.
Feb  2 12:21:50 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/384791043' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 12:21:50 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/384791043' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 12:21:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v57: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:50 np0005605476 python3[92356]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:50 np0005605476 podman[92364]: 2026-02-02 17:21:50.889977972 +0000 UTC m=+0.037692464 container create 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:50 np0005605476 systemd[1]: Started libpod-conmon-587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676.scope.
Feb  2 12:21:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ad032ab0b69820f905e70b945018d09adf1bc23f0dd81982eaa5847f3d4e73/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ad032ab0b69820f905e70b945018d09adf1bc23f0dd81982eaa5847f3d4e73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ad032ab0b69820f905e70b945018d09adf1bc23f0dd81982eaa5847f3d4e73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:50 np0005605476 podman[92364]: 2026-02-02 17:21:50.971765319 +0000 UTC m=+0.119479831 container init 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:50 np0005605476 podman[92364]: 2026-02-02 17:21:50.871793599 +0000 UTC m=+0.019508111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:50 np0005605476 podman[92364]: 2026-02-02 17:21:50.977313656 +0000 UTC m=+0.125028158 container start 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:50 np0005605476 podman[92364]: 2026-02-02 17:21:50.980344581 +0000 UTC m=+0.128059093 container attach 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:51 np0005605476 podman[92417]: 2026-02-02 17:21:51.018437626 +0000 UTC m=+0.040673909 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:21:51 np0005605476 podman[92417]: 2026-02-02 17:21:51.096783976 +0000 UTC m=+0.119020239 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:21:51 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb  2 12:21:51 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1502182051' entity='client.admin' 
Feb  2 12:21:51 np0005605476 nifty_lalande[92402]: set ssl_option
Feb  2 12:21:51 np0005605476 systemd[1]: libpod-587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676.scope: Deactivated successfully.
Feb  2 12:21:51 np0005605476 podman[92364]: 2026-02-02 17:21:51.54574503 +0000 UTC m=+0.693459552 container died 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-63ad032ab0b69820f905e70b945018d09adf1bc23f0dd81982eaa5847f3d4e73-merged.mount: Deactivated successfully.
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:51 np0005605476 podman[92364]: 2026-02-02 17:21:51.583587678 +0000 UTC m=+0.731302200 container remove 587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676 (image=quay.io/ceph/ceph:v20, name=nifty_lalande, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:51 np0005605476 systemd[1]: libpod-conmon-587859b612a4045c213b2b6b74a0cc6e079f8af678bd6c0cb88d5104d5681676.scope: Deactivated successfully.
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/1502182051' entity='client.admin' 
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:51 np0005605476 python3[92674]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:51 np0005605476 podman[92682]: 2026-02-02 17:21:51.897768451 +0000 UTC m=+0.038675702 container create a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:21:51 np0005605476 systemd[1]: Started libpod-conmon-a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc.scope.
Feb  2 12:21:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d208b428248a2030a7db1b1ac2651cd995b043c24e7b428d042f578b979686/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d208b428248a2030a7db1b1ac2651cd995b043c24e7b428d042f578b979686/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d208b428248a2030a7db1b1ac2651cd995b043c24e7b428d042f578b979686/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:51 np0005605476 podman[92682]: 2026-02-02 17:21:51.880763141 +0000 UTC m=+0.021670482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:51 np0005605476 podman[92682]: 2026-02-02 17:21:51.986314868 +0000 UTC m=+0.127222159 container init a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:21:51 np0005605476 podman[92682]: 2026-02-02 17:21:51.990601509 +0000 UTC m=+0.131508770 container start a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:51 np0005605476 podman[92682]: 2026-02-02 17:21:51.993894972 +0000 UTC m=+0.134802223 container attach a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:21:52 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb  2 12:21:52 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:52 np0005605476 jolly_hoover[92706]: Scheduled rgw.rgw update...
Feb  2 12:21:52 np0005605476 systemd[1]: libpod-a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc.scope: Deactivated successfully.
Feb  2 12:21:52 np0005605476 podman[92682]: 2026-02-02 17:21:52.425430356 +0000 UTC m=+0.566337647 container died a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-75d208b428248a2030a7db1b1ac2651cd995b043c24e7b428d042f578b979686-merged.mount: Deactivated successfully.
Feb  2 12:21:52 np0005605476 podman[92682]: 2026-02-02 17:21:52.458708454 +0000 UTC m=+0.599615715 container remove a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc (image=quay.io/ceph/ceph:v20, name=jolly_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:52 np0005605476 systemd[1]: libpod-conmon-a3194ef31d8812da2f3153c9c559a752a47a49b5aefa8a190f52b112a5626abc.scope: Deactivated successfully.
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.475770326 +0000 UTC m=+0.036904383 container create 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:52 np0005605476 systemd[1]: Started libpod-conmon-441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518.scope.
Feb  2 12:21:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.534118721 +0000 UTC m=+0.095252838 container init 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.537556748 +0000 UTC m=+0.098690795 container start 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.539908945 +0000 UTC m=+0.101043082 container attach 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:21:52 np0005605476 busy_nobel[92834]: 167 167
Feb  2 12:21:52 np0005605476 systemd[1]: libpod-441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518.scope: Deactivated successfully.
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.542259331 +0000 UTC m=+0.103393378 container died 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.458118828 +0000 UTC m=+0.019252905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c5bcbbb16a06840547b9da663c8443b902683f7b1d3159f883d430ba8eeed839-merged.mount: Deactivated successfully.
Feb  2 12:21:52 np0005605476 podman[92806]: 2026-02-02 17:21:52.57590464 +0000 UTC m=+0.137038727 container remove 441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:52 np0005605476 systemd[1]: libpod-conmon-441d2ae223905ce0e961e9b52a3aa11f10ac1a69b861def21dfa81929bb82518.scope: Deactivated successfully.
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v58: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:52 np0005605476 podman[92858]: 2026-02-02 17:21:52.713978175 +0000 UTC m=+0.052209564 container create f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:21:52 np0005605476 systemd[1]: Started libpod-conmon-f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46.scope.
Feb  2 12:21:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:52 np0005605476 podman[92858]: 2026-02-02 17:21:52.692689735 +0000 UTC m=+0.030921134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:52 np0005605476 podman[92858]: 2026-02-02 17:21:52.810703014 +0000 UTC m=+0.148934463 container init f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:21:52 np0005605476 podman[92858]: 2026-02-02 17:21:52.821289072 +0000 UTC m=+0.159520461 container start f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:21:52 np0005605476 podman[92858]: 2026-02-02 17:21:52.825319506 +0000 UTC m=+0.163550875 container attach f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:21:52 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb  2 12:21:52 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb  2 12:21:53 np0005605476 thirsty_kepler[92874]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:21:53 np0005605476 thirsty_kepler[92874]: --> All data devices are unavailable
Feb  2 12:21:53 np0005605476 systemd[1]: libpod-f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46.scope: Deactivated successfully.
Feb  2 12:21:53 np0005605476 podman[92858]: 2026-02-02 17:21:53.28563295 +0000 UTC m=+0.623864329 container died f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:21:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c8031c8c184795c305925d3baa4b384009b94270f3e85cac56565cbac65affd2-merged.mount: Deactivated successfully.
Feb  2 12:21:53 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb  2 12:21:53 np0005605476 podman[92858]: 2026-02-02 17:21:53.33455459 +0000 UTC m=+0.672785979 container remove f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:21:53 np0005605476 systemd[1]: libpod-conmon-f4836529a1ad63321c764e635abb28efab854ac99abb46e04a74b62e73a79b46.scope: Deactivated successfully.
Feb  2 12:21:53 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb  2 12:21:53 np0005605476 python3[92968]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:21:53 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb  2 12:21:53 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb  2 12:21:53 np0005605476 ceph-mon[75197]: Saving service rgw.rgw spec with placement compute-0
Feb  2 12:21:53 np0005605476 python3[93105]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052913.133983-36743-190168550442457/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.71919233 +0000 UTC m=+0.044232259 container create 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:53 np0005605476 systemd[1]: Started libpod-conmon-7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728.scope.
Feb  2 12:21:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.775688974 +0000 UTC m=+0.100728923 container init 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.781511038 +0000 UTC m=+0.106550997 container start 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:21:53 np0005605476 pedantic_villani[93158]: 167 167
Feb  2 12:21:53 np0005605476 systemd[1]: libpod-7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728.scope: Deactivated successfully.
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.784959525 +0000 UTC m=+0.109999464 container attach 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:21:53 np0005605476 conmon[93158]: conmon 7c5d06b3a287ae4d0f48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728.scope/container/memory.events
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.786068957 +0000 UTC m=+0.111108906 container died 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.695528113 +0000 UTC m=+0.020568122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-226811a3a16f1aff45e09b2eec5fc64f9361130c9e0d5ef9c0501eea1b97b760-merged.mount: Deactivated successfully.
Feb  2 12:21:53 np0005605476 podman[93118]: 2026-02-02 17:21:53.814693344 +0000 UTC m=+0.139733273 container remove 7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:21:53 np0005605476 systemd[1]: libpod-conmon-7c5d06b3a287ae4d0f4851196953f5fb033a76e9c4e7fb08ab540d8292c50728.scope: Deactivated successfully.
Feb  2 12:21:53 np0005605476 podman[93183]: 2026-02-02 17:21:53.932394114 +0000 UTC m=+0.043360844 container create bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:21:53 np0005605476 systemd[1]: Started libpod-conmon-bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4.scope.
Feb  2 12:21:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e808c843d12e3bc5b32ae868d7ee7e42b8bc9bfdc004de1aaf1492a9dea28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e808c843d12e3bc5b32ae868d7ee7e42b8bc9bfdc004de1aaf1492a9dea28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e808c843d12e3bc5b32ae868d7ee7e42b8bc9bfdc004de1aaf1492a9dea28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e808c843d12e3bc5b32ae868d7ee7e42b8bc9bfdc004de1aaf1492a9dea28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:54.011513676 +0000 UTC m=+0.122480436 container init bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:53.917868245 +0000 UTC m=+0.028834995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:54.019206383 +0000 UTC m=+0.130173113 container start bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:54.022459445 +0000 UTC m=+0.133426295 container attach bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:54 np0005605476 python3[93227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.191676659 +0000 UTC m=+0.043022875 container create 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:21:54 np0005605476 systemd[1]: Started libpod-conmon-08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b.scope.
Feb  2 12:21:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e28c66fd09a57b9a709b01bcd7f091a3ab8ffb9eb6ebca1e4bd0a78d663c0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e28c66fd09a57b9a709b01bcd7f091a3ab8ffb9eb6ebca1e4bd0a78d663c0a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e28c66fd09a57b9a709b01bcd7f091a3ab8ffb9eb6ebca1e4bd0a78d663c0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.174900005 +0000 UTC m=+0.026246251 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:54 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.292645327 +0000 UTC m=+0.143991553 container init 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:54 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.299207772 +0000 UTC m=+0.150553988 container start 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.302452683 +0000 UTC m=+0.153798909 container attach 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:21:54 np0005605476 brave_napier[93218]: {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    "0": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "devices": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "/dev/loop3"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            ],
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_name": "ceph_lv0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_size": "21470642176",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "name": "ceph_lv0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "tags": {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.crush_device_class": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.encrypted": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_id": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.vdo": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.with_tpm": "0"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            },
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "vg_name": "ceph_vg0"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        }
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    ],
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    "1": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "devices": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "/dev/loop4"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            ],
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_name": "ceph_lv1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_size": "21470642176",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "name": "ceph_lv1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "tags": {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.crush_device_class": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.encrypted": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_id": "1",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.vdo": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.with_tpm": "0"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            },
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "vg_name": "ceph_vg1"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        }
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    ],
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    "2": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "devices": [
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "/dev/loop5"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            ],
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_name": "ceph_lv2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_size": "21470642176",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "name": "ceph_lv2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "tags": {
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.crush_device_class": "",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.encrypted": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osd_id": "2",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.vdo": "0",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:                "ceph.with_tpm": "0"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            },
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "type": "block",
Feb  2 12:21:54 np0005605476 brave_napier[93218]:            "vg_name": "ceph_vg2"
Feb  2 12:21:54 np0005605476 brave_napier[93218]:        }
Feb  2 12:21:54 np0005605476 brave_napier[93218]:    ]
Feb  2 12:21:54 np0005605476 brave_napier[93218]: }
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4.scope: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:54.340517197 +0000 UTC m=+0.451483947 container died bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-057e808c843d12e3bc5b32ae868d7ee7e42b8bc9bfdc004de1aaf1492a9dea28-merged.mount: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93183]: 2026-02-02 17:21:54.374875096 +0000 UTC m=+0.485841816 container remove bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_napier, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-conmon-bb05a480ee415e51b9a2eb15ee7d40e532baea360cd35baa436063f372f2edc4.scope: Deactivated successfully.
Feb  2 12:21:54 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb  2 12:21:54 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v59: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.761166943 +0000 UTC m=+0.035638356 container create 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 12:21:54 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0[75193]: 2026-02-02T17:21:54.771+0000 7f4895975640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e2 new map
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2026-02-02T17:21:54:772321+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2026-02-02T17:21:54.771913+0000
    modified	2026-02-02T17:21:54.771913+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 12:21:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:54 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  2 12:21:54 np0005605476 systemd[1]: Started libpod-conmon-5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735.scope.
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b.scope: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.812214183 +0000 UTC m=+0.663560409 container died 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:21:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.744965276 +0000 UTC m=+0.019436669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-10e28c66fd09a57b9a709b01bcd7f091a3ab8ffb9eb6ebca1e4bd0a78d663c0a-merged.mount: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.866630048 +0000 UTC m=+0.141101451 container init 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:21:54 np0005605476 podman[93230]: 2026-02-02 17:21:54.871852946 +0000 UTC m=+0.723199162 container remove 08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b (image=quay.io/ceph/ceph:v20, name=blissful_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.873717608 +0000 UTC m=+0.148189011 container start 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.877607808 +0000 UTC m=+0.152079181 container attach 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:54 np0005605476 infallible_antonelli[93365]: 167 167
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735.scope: Deactivated successfully.
Feb  2 12:21:54 np0005605476 conmon[93365]: conmon 5183ca7e0b260adedab5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735.scope/container/memory.events
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-conmon-08159ae9d400b510d764b2d774441724e6595decb77a37dc54c97d2afabef33b.scope: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.883278558 +0000 UTC m=+0.157749931 container died 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f2be3ed80716d5ec9c5c7e29d92b587decd509867b312507759319be8c8148b3-merged.mount: Deactivated successfully.
Feb  2 12:21:54 np0005605476 podman[93347]: 2026-02-02 17:21:54.914080667 +0000 UTC m=+0.188552040 container remove 5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:21:54 np0005605476 systemd[1]: libpod-conmon-5183ca7e0b260adedab5afa5ea3788eaad84405e14b84ffd62efcb38c160e735.scope: Deactivated successfully.
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.03010589 +0000 UTC m=+0.036227323 container create 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:21:55 np0005605476 systemd[1]: Started libpod-conmon-2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502.scope.
Feb  2 12:21:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4af5f396d3028cf54a970c248cc818cb6c4bfec070d96c71e9bac248b0ac24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4af5f396d3028cf54a970c248cc818cb6c4bfec070d96c71e9bac248b0ac24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4af5f396d3028cf54a970c248cc818cb6c4bfec070d96c71e9bac248b0ac24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4af5f396d3028cf54a970c248cc818cb6c4bfec070d96c71e9bac248b0ac24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.015424366 +0000 UTC m=+0.021545819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.128867566 +0000 UTC m=+0.134989019 container init 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.13753157 +0000 UTC m=+0.143653043 container start 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.141548304 +0000 UTC m=+0.147669737 container attach 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:21:55 np0005605476 python3[93442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.223529956 +0000 UTC m=+0.030531572 container create 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:21:55 np0005605476 systemd[1]: Started libpod-conmon-08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b.scope.
Feb  2 12:21:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4496263e8b2b53c588cd60b0c1d281670a2448fa858b50b94928c5060c29b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4496263e8b2b53c588cd60b0c1d281670a2448fa858b50b94928c5060c29b8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4496263e8b2b53c588cd60b0c1d281670a2448fa858b50b94928c5060c29b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.276371927 +0000 UTC m=+0.083373573 container init 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.280797632 +0000 UTC m=+0.087799248 container start 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.283786096 +0000 UTC m=+0.090787742 container attach 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:21:55 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb  2 12:21:55 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.210141298 +0000 UTC m=+0.017142934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:55 np0005605476 lvm[93562]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:21:55 np0005605476 lvm[93562]: VG ceph_vg0 finished
Feb  2 12:21:55 np0005605476 lvm[93564]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:21:55 np0005605476 lvm[93564]: VG ceph_vg1 finished
Feb  2 12:21:55 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 12:21:55 np0005605476 ceph-mgr[75493]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:55 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 12:21:55 np0005605476 lvm[93566]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:21:55 np0005605476 lvm[93566]: VG ceph_vg2 finished
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:55 np0005605476 naughty_pare[93467]: Scheduled mds.cephfs update...
Feb  2 12:21:55 np0005605476 systemd[1]: libpod-08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b.scope: Deactivated successfully.
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.738478202 +0000 UTC m=+0.545479808 container died 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:21:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7a4496263e8b2b53c588cd60b0c1d281670a2448fa858b50b94928c5060c29b8-merged.mount: Deactivated successfully.
Feb  2 12:21:55 np0005605476 podman[93451]: 2026-02-02 17:21:55.768021986 +0000 UTC m=+0.575023602 container remove 08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b (image=quay.io/ceph/ceph:v20, name=naughty_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:21:55 np0005605476 systemd[1]: libpod-conmon-08dec8075c0d7c0be8e1aa4949f37a2ce4057664692f58a807d89fc91cd30a0b.scope: Deactivated successfully.
Feb  2 12:21:55 np0005605476 epic_mahavira[93446]: {}
Feb  2 12:21:55 np0005605476 systemd[1]: libpod-2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502.scope: Deactivated successfully.
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.820202128 +0000 UTC m=+0.826323561 container died 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0b4af5f396d3028cf54a970c248cc818cb6c4bfec070d96c71e9bac248b0ac24-merged.mount: Deactivated successfully.
Feb  2 12:21:55 np0005605476 podman[93405]: 2026-02-02 17:21:55.848452145 +0000 UTC m=+0.854573568 container remove 2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:55 np0005605476 systemd[1]: libpod-conmon-2c39cdf44d438d24c489713c49d14faddbfa1fd87cdd2ca5c2e4fe207fbf7502.scope: Deactivated successfully.
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:55 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb  2 12:21:55 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb  2 12:21:56 np0005605476 podman[93714]: 2026-02-02 17:21:56.426976124 +0000 UTC m=+0.049711713 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:56 np0005605476 podman[93812]: 2026-02-02 17:21:56.57323809 +0000 UTC m=+0.048911411 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 12:21:56 np0005605476 podman[93714]: 2026-02-02 17:21:56.577418708 +0000 UTC m=+0.200154287 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:21:56 np0005605476 python3[93811]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v61: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:56 np0005605476 python3[93960]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770052916.4077528-36791-74478092132798/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=c05a45844c01ac516fc883d7d16b3b5808c36afe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.305826325 +0000 UTC m=+0.031522100 container create bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:57 np0005605476 python3[94115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:57 np0005605476 systemd[1]: Started libpod-conmon-bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb.scope.
Feb  2 12:21:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:57 np0005605476 podman[94145]: 2026-02-02 17:21:57.375095309 +0000 UTC m=+0.037314124 container create 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.385990386 +0000 UTC m=+0.111686191 container init bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.291025507 +0000 UTC m=+0.016721302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.394410344 +0000 UTC m=+0.120106119 container start bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:21:57 np0005605476 trusting_chatelet[94153]: 167 167
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.398847889 +0000 UTC m=+0.124543684 container attach bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.400409653 +0000 UTC m=+0.126105448 container died bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:57 np0005605476 systemd[1]: Started libpod-conmon-6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e.scope.
Feb  2 12:21:57 np0005605476 systemd[1]: libpod-bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb.scope: Deactivated successfully.
Feb  2 12:21:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-beac623b10ace1e87a825a04fb5eb2544c4376b92a90658d1bbfe3a006fb0670-merged.mount: Deactivated successfully.
Feb  2 12:21:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:57 np0005605476 podman[94128]: 2026-02-02 17:21:57.435970126 +0000 UTC m=+0.161665921 container remove bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chatelet, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd2c258742d1fde0e8ae13345eb6c20570b4e07583c248dd178006ef2ba094a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd2c258742d1fde0e8ae13345eb6c20570b4e07583c248dd178006ef2ba094a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 systemd[1]: libpod-conmon-bd14193c831b9175c361c39dedadec09fb795402ccb892305199b2f081fd94cb.scope: Deactivated successfully.
Feb  2 12:21:57 np0005605476 podman[94145]: 2026-02-02 17:21:57.358310495 +0000 UTC m=+0.020529340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:57 np0005605476 podman[94145]: 2026-02-02 17:21:57.462993408 +0000 UTC m=+0.125212243 container init 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:21:57 np0005605476 podman[94145]: 2026-02-02 17:21:57.468330979 +0000 UTC m=+0.130549784 container start 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:21:57 np0005605476 podman[94145]: 2026-02-02 17:21:57.471601071 +0000 UTC m=+0.133819866 container attach 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:57 np0005605476 podman[94188]: 2026-02-02 17:21:57.547822421 +0000 UTC m=+0.035430110 container create 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:57 np0005605476 systemd[1]: Started libpod-conmon-75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a.scope.
Feb  2 12:21:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:57 np0005605476 podman[94188]: 2026-02-02 17:21:57.529239857 +0000 UTC m=+0.016847566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:57 np0005605476 podman[94188]: 2026-02-02 17:21:57.630301848 +0000 UTC m=+0.117909577 container init 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:21:57 np0005605476 podman[94188]: 2026-02-02 17:21:57.634107025 +0000 UTC m=+0.121714714 container start 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:57 np0005605476 podman[94188]: 2026-02-02 17:21:57.636777431 +0000 UTC m=+0.124385120 container attach 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: Saving service mds.cephfs spec with placement compute-0
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/790231431' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  2 12:21:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/790231431' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 clever_benz[94208]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:21:58 np0005605476 podman[94244]: 2026-02-02 17:21:58.051405617 +0000 UTC m=+0.032274342 container died 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:58 np0005605476 clever_benz[94208]: --> All data devices are unavailable
Feb  2 12:21:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fdd2c258742d1fde0e8ae13345eb6c20570b4e07583c248dd178006ef2ba094a-merged.mount: Deactivated successfully.
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 podman[94244]: 2026-02-02 17:21:58.090382886 +0000 UTC m=+0.071251561 container remove 6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e (image=quay.io/ceph/ceph:v20, name=elegant_carson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-conmon-6d1ac8468e2ab279ae7fef352f19e6938af64544d352374698add2f7b34acc3e.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 podman[94188]: 2026-02-02 17:21:58.097529328 +0000 UTC m=+0.585137027 container died 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:58 np0005605476 podman[94188]: 2026-02-02 17:21:58.133567354 +0000 UTC m=+0.621175114 container remove 75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-conmon-75ffe586554a24483451002f0bdcf781fee0f051091ab6ccc3c4d7f39c7ff70a.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ed444dd6ddeab15d175717cedd4a97c2cc0b9743c10457d47a9da2789f018303-merged.mount: Deactivated successfully.
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.549329553 +0000 UTC m=+0.036509911 container create d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:21:58 np0005605476 systemd[1]: Started libpod-conmon-d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81.scope.
Feb  2 12:21:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.533741623 +0000 UTC m=+0.020922011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.63466007 +0000 UTC m=+0.121840448 container init d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.638863448 +0000 UTC m=+0.126043806 container start d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.641238965 +0000 UTC m=+0.128419343 container attach d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:21:58 np0005605476 priceless_bouman[94377]: 167 167
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 conmon[94377]: conmon d9eceb0517791270f63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81.scope/container/memory.events
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.643747296 +0000 UTC m=+0.130927664 container died d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-43b3607c1198c10e61148639c6acc00d80ea8a6fc100eaf125d4bd7221f7ac15-merged.mount: Deactivated successfully.
Feb  2 12:21:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v62: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:21:58 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/790231431' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  2 12:21:58 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/790231431' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 12:21:58 np0005605476 podman[94335]: 2026-02-02 17:21:58.673375092 +0000 UTC m=+0.160555450 container remove d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:58 np0005605476 systemd[1]: libpod-conmon-d9eceb0517791270f63df07043aa56d579c1758a74b4ce79f3de4a883d26ea81.scope: Deactivated successfully.
Feb  2 12:21:58 np0005605476 python3[94379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:58 np0005605476 podman[94397]: 2026-02-02 17:21:58.763277068 +0000 UTC m=+0.035622796 container create ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:58 np0005605476 systemd[1]: Started libpod-conmon-ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2.scope.
Feb  2 12:21:58 np0005605476 podman[94416]: 2026-02-02 17:21:58.802215986 +0000 UTC m=+0.040621207 container create dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8a5e4cc385f86c0301493c9a2fcfb64b472c3bae55ffb9b88795a866fcea4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8a5e4cc385f86c0301493c9a2fcfb64b472c3bae55ffb9b88795a866fcea4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 systemd[1]: Started libpod-conmon-dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810.scope.
Feb  2 12:21:58 np0005605476 podman[94397]: 2026-02-02 17:21:58.829442994 +0000 UTC m=+0.101788752 container init ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aaa0bfd768e5bf7e9ddbcc4824af0c690a2f0a0eef98a97f91ba4dc89c7e09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aaa0bfd768e5bf7e9ddbcc4824af0c690a2f0a0eef98a97f91ba4dc89c7e09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 podman[94397]: 2026-02-02 17:21:58.834443085 +0000 UTC m=+0.106788813 container start ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aaa0bfd768e5bf7e9ddbcc4824af0c690a2f0a0eef98a97f91ba4dc89c7e09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aaa0bfd768e5bf7e9ddbcc4824af0c690a2f0a0eef98a97f91ba4dc89c7e09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:58 np0005605476 podman[94397]: 2026-02-02 17:21:58.83742198 +0000 UTC m=+0.109767728 container attach ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:21:58 np0005605476 podman[94397]: 2026-02-02 17:21:58.749677344 +0000 UTC m=+0.022023092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:58 np0005605476 podman[94416]: 2026-02-02 17:21:58.851400794 +0000 UTC m=+0.089806015 container init dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:21:58 np0005605476 podman[94416]: 2026-02-02 17:21:58.856956071 +0000 UTC m=+0.095361292 container start dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:21:58 np0005605476 podman[94416]: 2026-02-02 17:21:58.861428747 +0000 UTC m=+0.099834058 container attach dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:58 np0005605476 podman[94416]: 2026-02-02 17:21:58.787635255 +0000 UTC m=+0.026040496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]: {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    "0": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "devices": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "/dev/loop3"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            ],
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_name": "ceph_lv0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_size": "21470642176",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "name": "ceph_lv0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "tags": {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.crush_device_class": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.encrypted": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_id": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.vdo": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.with_tpm": "0"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            },
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "vg_name": "ceph_vg0"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        }
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    ],
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    "1": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "devices": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "/dev/loop4"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            ],
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_name": "ceph_lv1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_size": "21470642176",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "name": "ceph_lv1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "tags": {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.crush_device_class": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.encrypted": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_id": "1",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.vdo": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.with_tpm": "0"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            },
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "vg_name": "ceph_vg1"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        }
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    ],
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    "2": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "devices": [
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "/dev/loop5"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            ],
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_name": "ceph_lv2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_size": "21470642176",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "name": "ceph_lv2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "tags": {
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.cluster_name": "ceph",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.crush_device_class": "",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.encrypted": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.objectstore": "bluestore",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osd_id": "2",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.vdo": "0",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:                "ceph.with_tpm": "0"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            },
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "type": "block",
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:            "vg_name": "ceph_vg2"
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:        }
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]:    ]
Feb  2 12:21:59 np0005605476 reverent_wiles[94436]: }
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94416]: 2026-02-02 17:21:59.144712138 +0000 UTC m=+0.383117359 container died dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:59 np0005605476 podman[94416]: 2026-02-02 17:21:59.180782145 +0000 UTC m=+0.419187366 container remove dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-conmon-dc008d3e247dcac29b20e4dc0075ea279608a055ec1f1a6c66fb0c9f195ee810.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e7aaa0bfd768e5bf7e9ddbcc4824af0c690a2f0a0eef98a97f91ba4dc89c7e09-merged.mount: Deactivated successfully.
Feb  2 12:21:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 12:21:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006878451' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 12:21:59 np0005605476 upbeat_franklin[94431]: 
Feb  2 12:21:59 np0005605476 upbeat_franklin[94431]: {"fsid":"eb48d0ef-3496-563c-b73d-661fb962013e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":100,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1770052888,"num_in_osds":3,"osd_in_since":1770052869,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":69}],"num_pgs":69,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83775488,"bytes_avail":64328151040,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-02T17:21:54:772321+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T17:21:38.658205+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94397]: 2026-02-02 17:21:59.342274511 +0000 UTC m=+0.614620249 container died ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:21:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-be8a5e4cc385f86c0301493c9a2fcfb64b472c3bae55ffb9b88795a866fcea4e-merged.mount: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94397]: 2026-02-02 17:21:59.375020095 +0000 UTC m=+0.647365823 container remove ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2 (image=quay.io/ceph/ceph:v20, name=upbeat_franklin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-conmon-ddeb956776f2ce21b04bd8185064fe833ea3bf181a465f9616f796920a9c48e2.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.600713311 +0000 UTC m=+0.042048607 container create fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:21:59 np0005605476 systemd[1]: Started libpod-conmon-fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85.scope.
Feb  2 12:21:59 np0005605476 python3[94566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:21:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.664810339 +0000 UTC m=+0.106145635 container init fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.668640957 +0000 UTC m=+0.109976243 container start fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:59 np0005605476 objective_babbage[94595]: 167 167
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.672198878 +0000 UTC m=+0.113534164 container attach fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.672767414 +0000 UTC m=+0.114102700 container died fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.579721429 +0000 UTC m=+0.021056795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:21:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9435166c0a5da33e8e78bbbd5559a388a834bf8bd0c8fd7eeb6ee1fc1d737371-merged.mount: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94578]: 2026-02-02 17:21:59.701183725 +0000 UTC m=+0.142519001 container remove fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:21:59 np0005605476 systemd[1]: libpod-conmon-fa31ded695223391945f88829657d1fc72391eceacebc30ea89610c050dd7c85.scope: Deactivated successfully.
Feb  2 12:21:59 np0005605476 podman[94597]: 2026-02-02 17:21:59.708355338 +0000 UTC m=+0.052629176 container create 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:21:59 np0005605476 systemd[1]: Started libpod-conmon-9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b.scope.
Feb  2 12:21:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f4913003d8fd2f7f8f3d4a31e05d8246fcbe38cc00835fa651afdb9e31f048/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f4913003d8fd2f7f8f3d4a31e05d8246fcbe38cc00835fa651afdb9e31f048/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 podman[94597]: 2026-02-02 17:21:59.753578033 +0000 UTC m=+0.097851861 container init 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:21:59 np0005605476 podman[94597]: 2026-02-02 17:21:59.757972627 +0000 UTC m=+0.102246465 container start 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:21:59 np0005605476 podman[94597]: 2026-02-02 17:21:59.761485337 +0000 UTC m=+0.105759205 container attach 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:21:59 np0005605476 podman[94597]: 2026-02-02 17:21:59.671312063 +0000 UTC m=+0.015585921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:21:59 np0005605476 podman[94636]: 2026-02-02 17:21:59.798535022 +0000 UTC m=+0.030138662 container create 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:21:59 np0005605476 systemd[1]: Started libpod-conmon-1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a.scope.
Feb  2 12:21:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221cc857d6489765a34ca877a4d68be5a73fd10d7da5f42f2d9fb8e712cc5052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221cc857d6489765a34ca877a4d68be5a73fd10d7da5f42f2d9fb8e712cc5052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221cc857d6489765a34ca877a4d68be5a73fd10d7da5f42f2d9fb8e712cc5052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221cc857d6489765a34ca877a4d68be5a73fd10d7da5f42f2d9fb8e712cc5052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:21:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb  2 12:21:59 np0005605476 podman[94636]: 2026-02-02 17:21:59.869396221 +0000 UTC m=+0.100999871 container init 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:21:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb  2 12:21:59 np0005605476 podman[94636]: 2026-02-02 17:21:59.875444081 +0000 UTC m=+0.107047721 container start 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:21:59 np0005605476 podman[94636]: 2026-02-02 17:21:59.879900777 +0000 UTC m=+0.111504417 container attach 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:21:59 np0005605476 podman[94636]: 2026-02-02 17:21:59.786516093 +0000 UTC m=+0.018119733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672327419' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:22:00 np0005605476 admiring_knuth[94627]: 
Feb  2 12:22:00 np0005605476 admiring_knuth[94627]: {"epoch":1,"fsid":"eb48d0ef-3496-563c-b73d-661fb962013e","modified":"2026-02-02T17:20:15.057605Z","created":"2026-02-02T17:20:15.057605Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb  2 12:22:00 np0005605476 admiring_knuth[94627]: dumped monmap epoch 1
Feb  2 12:22:00 np0005605476 systemd[1]: libpod-9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b.scope: Deactivated successfully.
Feb  2 12:22:00 np0005605476 podman[94597]: 2026-02-02 17:22:00.284999024 +0000 UTC m=+0.629272862 container died 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:00 np0005605476 podman[94597]: 2026-02-02 17:22:00.312502339 +0000 UTC m=+0.656776177 container remove 9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b (image=quay.io/ceph/ceph:v20, name=admiring_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c9f4913003d8fd2f7f8f3d4a31e05d8246fcbe38cc00835fa651afdb9e31f048-merged.mount: Deactivated successfully.
Feb  2 12:22:00 np0005605476 systemd[1]: libpod-conmon-9ddd5196e62aa5d7bf38a42e12eaad6662916d6f9a49c09c8ed48c123dd2c07b.scope: Deactivated successfully.
Feb  2 12:22:00 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb  2 12:22:00 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb  2 12:22:00 np0005605476 lvm[94760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:22:00 np0005605476 lvm[94760]: VG ceph_vg0 finished
Feb  2 12:22:00 np0005605476 lvm[94763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:22:00 np0005605476 lvm[94763]: VG ceph_vg1 finished
Feb  2 12:22:00 np0005605476 lvm[94765]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:22:00 np0005605476 lvm[94765]: VG ceph_vg2 finished
Feb  2 12:22:00 np0005605476 frosty_noyce[94652]: {}
Feb  2 12:22:00 np0005605476 systemd[1]: libpod-1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a.scope: Deactivated successfully.
Feb  2 12:22:00 np0005605476 podman[94636]: 2026-02-02 17:22:00.546876861 +0000 UTC m=+0.778480561 container died 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:22:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-221cc857d6489765a34ca877a4d68be5a73fd10d7da5f42f2d9fb8e712cc5052-merged.mount: Deactivated successfully.
Feb  2 12:22:00 np0005605476 podman[94636]: 2026-02-02 17:22:00.592974801 +0000 UTC m=+0.824578451 container remove 1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:22:00 np0005605476 systemd[1]: libpod-conmon-1ef75702eedb27264cc3763c09b13fbc0868d222912c315eedb81dcaff1ddc3a.scope: Deactivated successfully.
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:00 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 49f441ee-e2cf-4fbf-9c78-f583b197ae89 (Updating rgw.rgw deployment (+1 -> 1))
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.molmny", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.molmny", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.molmny", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 12:22:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v63: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:00 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.molmny on compute-0
Feb  2 12:22:00 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.molmny on compute-0
Feb  2 12:22:00 np0005605476 python3[94805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:00 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb  2 12:22:00 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb  2 12:22:00 np0005605476 podman[94856]: 2026-02-02 17:22:00.874843972 +0000 UTC m=+0.051810282 container create 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:00 np0005605476 systemd[1]: Started libpod-conmon-80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221.scope.
Feb  2 12:22:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5777ae55fb35cf44d6f62b731585028069a5876ee1a4eba0eea2ba82665ad62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5777ae55fb35cf44d6f62b731585028069a5876ee1a4eba0eea2ba82665ad62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:00 np0005605476 podman[94856]: 2026-02-02 17:22:00.94636527 +0000 UTC m=+0.123331590 container init 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:22:00 np0005605476 podman[94856]: 2026-02-02 17:22:00.855820356 +0000 UTC m=+0.032786676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:00 np0005605476 podman[94856]: 2026-02-02 17:22:00.95451755 +0000 UTC m=+0.131483840 container start 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:00 np0005605476 podman[94856]: 2026-02-02 17:22:00.957986228 +0000 UTC m=+0.134952618 container attach 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.molmny", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.molmny", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.130642768 +0000 UTC m=+0.030548393 container create 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:01 np0005605476 systemd[1]: Started libpod-conmon-2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c.scope.
Feb  2 12:22:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.178763166 +0000 UTC m=+0.078668791 container init 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.182388738 +0000 UTC m=+0.082294363 container start 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:01 np0005605476 agitated_lederberg[94952]: 167 167
Feb  2 12:22:01 np0005605476 systemd[1]: libpod-2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c.scope: Deactivated successfully.
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.190641181 +0000 UTC m=+0.090546816 container attach 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.191023201 +0000 UTC m=+0.090928826 container died 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.117924939 +0000 UTC m=+0.017830584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-92df9f1249f37c86f9b77d86644ebf5b8e71f05f51654616179292b6c89ad56c-merged.mount: Deactivated successfully.
Feb  2 12:22:01 np0005605476 podman[94935]: 2026-02-02 17:22:01.23953893 +0000 UTC m=+0.139444555 container remove 2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:22:01 np0005605476 systemd[1]: libpod-conmon-2742d32c293c27e4f30795ea6a067f40cf7380fec868924909d5a05d45a59c7c.scope: Deactivated successfully.
Feb  2 12:22:01 np0005605476 systemd[1]: Reloading.
Feb  2 12:22:01 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:22:01 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:22:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb  2 12:22:01 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb  2 12:22:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb  2 12:22:01 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb  2 12:22:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2471799885' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  2 12:22:01 np0005605476 eager_feynman[94872]: [client.openstack]
Feb  2 12:22:01 np0005605476 eager_feynman[94872]: #011key = AQCx3IBpAAAAABAAwfd9vryP50N2U55y9ozPvw==
Feb  2 12:22:01 np0005605476 eager_feynman[94872]: #011caps mgr = "allow *"
Feb  2 12:22:01 np0005605476 eager_feynman[94872]: #011caps mon = "profile rbd"
Feb  2 12:22:01 np0005605476 eager_feynman[94872]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb  2 12:22:01 np0005605476 podman[94856]: 2026-02-02 17:22:01.458833466 +0000 UTC m=+0.635799766 container died 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:01 np0005605476 systemd[1]: libpod-80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221.scope: Deactivated successfully.
Feb  2 12:22:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e5777ae55fb35cf44d6f62b731585028069a5876ee1a4eba0eea2ba82665ad62-merged.mount: Deactivated successfully.
Feb  2 12:22:01 np0005605476 podman[94856]: 2026-02-02 17:22:01.500142371 +0000 UTC m=+0.677108691 container remove 80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221 (image=quay.io/ceph/ceph:v20, name=eager_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:22:01 np0005605476 systemd[1]: libpod-conmon-80c4e7a671a6718ff5c45bdb6bcb2be2e16cf67e5a0086b061d2fdea55331221.scope: Deactivated successfully.
Feb  2 12:22:01 np0005605476 systemd[1]: Reloading.
Feb  2 12:22:01 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:22:01 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:22:01 np0005605476 systemd[1]: Starting Ceph rgw.rgw.compute-0.molmny for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:22:01 np0005605476 podman[95109]: 2026-02-02 17:22:01.960355914 +0000 UTC m=+0.052209604 container create 31c23500b425f65c54ced9f2de36b162413b6fa358f90c128fa457114a0a693e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-rgw-rgw-compute-0-molmny, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:22:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc235568d5ff1570e05cad10c8716d6f325a02e28f068a1c35e5f616b2837a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc235568d5ff1570e05cad10c8716d6f325a02e28f068a1c35e5f616b2837a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc235568d5ff1570e05cad10c8716d6f325a02e28f068a1c35e5f616b2837a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc235568d5ff1570e05cad10c8716d6f325a02e28f068a1c35e5f616b2837a2/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.molmny supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: Deploying daemon rgw.rgw.compute-0.molmny on compute-0
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2471799885' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  2 12:22:02 np0005605476 podman[95109]: 2026-02-02 17:22:02.036148402 +0000 UTC m=+0.128002172 container init 31c23500b425f65c54ced9f2de36b162413b6fa358f90c128fa457114a0a693e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-rgw-rgw-compute-0-molmny, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:22:02 np0005605476 podman[95109]: 2026-02-02 17:22:01.943600341 +0000 UTC m=+0.035454061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:02 np0005605476 podman[95109]: 2026-02-02 17:22:02.042440289 +0000 UTC m=+0.134294009 container start 31c23500b425f65c54ced9f2de36b162413b6fa358f90c128fa457114a0a693e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-rgw-rgw-compute-0-molmny, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:22:02 np0005605476 bash[95109]: 31c23500b425f65c54ced9f2de36b162413b6fa358f90c128fa457114a0a693e
Feb  2 12:22:02 np0005605476 systemd[1]: Started Ceph rgw.rgw.compute-0.molmny for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:22:02 np0005605476 radosgw[95129]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:22:02 np0005605476 radosgw[95129]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb  2 12:22:02 np0005605476 radosgw[95129]: framework: beast
Feb  2 12:22:02 np0005605476 radosgw[95129]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb  2 12:22:02 np0005605476 radosgw[95129]: init_numa not setting numa affinity
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 49f441ee-e2cf-4fbf-9c78-f583b197ae89 (Updating rgw.rgw deployment (+1 -> 1))
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 49f441ee-e2cf-4fbf-9c78-f583b197ae89 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 6079a562-b462-48e4-a058-1f351e984ddc (Updating mds.cephfs deployment (+1 -> 1))
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vvdoei", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vvdoei", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vvdoei", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.vvdoei on compute-0
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.vvdoei on compute-0
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 6 completed events
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:22:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.577088981 +0000 UTC m=+0.038496817 container create b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:02 np0005605476 systemd[1]: Started libpod-conmon-b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3.scope.
Feb  2 12:22:02 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.646233852 +0000 UTC m=+0.107641698 container init b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.652012195 +0000 UTC m=+0.113420011 container start b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.557647633 +0000 UTC m=+0.019055479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:02 np0005605476 quirky_jepsen[95409]: 167 167
Feb  2 12:22:02 np0005605476 systemd[1]: libpod-b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3.scope: Deactivated successfully.
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.65751168 +0000 UTC m=+0.118919526 container attach b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.658160098 +0000 UTC m=+0.119567924 container died b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:22:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v64: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1649d39cae5e99448022a526864709d4c9d310f50f786bb561372db3b1616ede-merged.mount: Deactivated successfully.
Feb  2 12:22:02 np0005605476 podman[95347]: 2026-02-02 17:22:02.689490122 +0000 UTC m=+0.150897948 container remove b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:22:02 np0005605476 systemd[1]: libpod-conmon-b8dd6db550853ae5caf7fb15227a966005376d203e9c5317923d1b5eefc3d2b3.scope: Deactivated successfully.
Feb  2 12:22:02 np0005605476 systemd[1]: Reloading.
Feb  2 12:22:02 np0005605476 ansible-async_wrapper.py[95411]: Invoked with j150128337109 30 /home/zuul/.ansible/tmp/ansible-tmp-1770052922.3759606-36863-204847212225996/AnsiballZ_command.py _
Feb  2 12:22:02 np0005605476 ansible-async_wrapper.py[95433]: Starting module and watcher
Feb  2 12:22:02 np0005605476 ansible-async_wrapper.py[95433]: Start watching 95434 (30)
Feb  2 12:22:02 np0005605476 ansible-async_wrapper.py[95434]: Start module (95434)
Feb  2 12:22:02 np0005605476 ansible-async_wrapper.py[95411]: Return async_wrapper task started.
Feb  2 12:22:02 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:22:02 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:22:02 np0005605476 python3[95435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:02 np0005605476 podman[95470]: 2026-02-02 17:22:02.967602867 +0000 UTC m=+0.044749173 container create 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:22:03 np0005605476 systemd[1]: Started libpod-conmon-48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575.scope.
Feb  2 12:22:03 np0005605476 systemd[1]: Reloading.
Feb  2 12:22:03 np0005605476 podman[95470]: 2026-02-02 17:22:02.949748403 +0000 UTC m=+0.026894739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:03 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:22:03 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: Saving service rgw.rgw spec with placement compute-0
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vvdoei", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vvdoei", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: Deploying daemon mds.cephfs.compute-0.vvdoei on compute-0
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2536784476' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  2 12:22:03 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ee42635438fba481b22d4ea85cafc32fd892284fb86d9967ed4e06640e90f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 systemd[1]: Starting Ceph mds.cephfs.compute-0.vvdoei for eb48d0ef-3496-563c-b73d-661fb962013e...
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ee42635438fba481b22d4ea85cafc32fd892284fb86d9967ed4e06640e90f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 podman[95470]: 2026-02-02 17:22:03.251532406 +0000 UTC m=+0.328678742 container init 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:22:03 np0005605476 podman[95470]: 2026-02-02 17:22:03.256010903 +0000 UTC m=+0.333157219 container start 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:22:03 np0005605476 podman[95470]: 2026-02-02 17:22:03.259076689 +0000 UTC m=+0.336223005 container attach 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:22:03 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb  2 12:22:03 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb  2 12:22:03 np0005605476 podman[95595]: 2026-02-02 17:22:03.429670892 +0000 UTC m=+0.037145419 container create 30ea8cb4e62f17278ebefb0cae478bc9b6467ba0762d054bfe95a0976430da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mds-cephfs-compute-0-vvdoei, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a407352bffbd8c32c0c5fae65a069ae35ec5a4538ab19f169eac7fb4d31648b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a407352bffbd8c32c0c5fae65a069ae35ec5a4538ab19f169eac7fb4d31648b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a407352bffbd8c32c0c5fae65a069ae35ec5a4538ab19f169eac7fb4d31648b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a407352bffbd8c32c0c5fae65a069ae35ec5a4538ab19f169eac7fb4d31648b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.vvdoei supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:03 np0005605476 podman[95595]: 2026-02-02 17:22:03.486979658 +0000 UTC m=+0.094454195 container init 30ea8cb4e62f17278ebefb0cae478bc9b6467ba0762d054bfe95a0976430da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mds-cephfs-compute-0-vvdoei, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:03 np0005605476 podman[95595]: 2026-02-02 17:22:03.498476963 +0000 UTC m=+0.105951500 container start 30ea8cb4e62f17278ebefb0cae478bc9b6467ba0762d054bfe95a0976430da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mds-cephfs-compute-0-vvdoei, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:03 np0005605476 bash[95595]: 30ea8cb4e62f17278ebefb0cae478bc9b6467ba0762d054bfe95a0976430da56
Feb  2 12:22:03 np0005605476 podman[95595]: 2026-02-02 17:22:03.411240652 +0000 UTC m=+0.018715209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:03 np0005605476 systemd[1]: Started Ceph mds.cephfs.compute-0.vvdoei for eb48d0ef-3496-563c-b73d-661fb962013e.
Feb  2 12:22:03 np0005605476 ceph-mds[95614]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:22:03 np0005605476 ceph-mds[95614]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb  2 12:22:03 np0005605476 ceph-mds[95614]: main not setting numa affinity
Feb  2 12:22:03 np0005605476 ceph-mds[95614]: pidfile_write: ignore empty --pid-file
Feb  2 12:22:03 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mds-cephfs-compute-0-vvdoei[95610]: starting mds.cephfs.compute-0.vvdoei at 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:03 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei Updating MDS map to version 2 from mon.0
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 6079a562-b462-48e4-a058-1f351e984ddc (Updating mds.cephfs deployment (+1 -> 1))
Feb  2 12:22:03 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 6079a562-b462-48e4-a058-1f351e984ddc (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:03 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:22:03 np0005605476 priceless_swirles[95491]: 
Feb  2 12:22:03 np0005605476 priceless_swirles[95491]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 12:22:03 np0005605476 systemd[1]: libpod-48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575.scope: Deactivated successfully.
Feb  2 12:22:03 np0005605476 podman[95688]: 2026-02-02 17:22:03.73235562 +0000 UTC m=+0.027227800 container died 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:22:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a6ee42635438fba481b22d4ea85cafc32fd892284fb86d9967ed4e06640e90f1-merged.mount: Deactivated successfully.
Feb  2 12:22:03 np0005605476 podman[95688]: 2026-02-02 17:22:03.766827682 +0000 UTC m=+0.061699842 container remove 48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575 (image=quay.io/ceph/ceph:v20, name=priceless_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:22:03 np0005605476 systemd[1]: libpod-conmon-48732cfbe79c19f5bb85eb39099f94a65c4d92092de74bf4cb2e231474eca575.scope: Deactivated successfully.
Feb  2 12:22:03 np0005605476 ansible-async_wrapper.py[95434]: Module complete (95434)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/2186716036,v1:192.168.122.100:6815/2186716036] as mds.0
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vvdoei assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 12:22:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e3 new map
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-02-02T17:22:03:998510+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T17:21:54.771913+0000#012modified#0112026-02-02T17:22:03.998499+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.vvdoei{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/2186716036,v1:192.168.122.100:6815/2186716036] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei Updating MDS map to version 3 from mon.0
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.3 handle_mds_map I am now mds.0.3
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x1
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2186716036,v1:192.168.122.100:6815/2186716036] up:boot
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vvdoei=up:creating}
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.vvdoei"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.vvdoei"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e3 all = 0
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x100
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x600
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x601
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x602
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x603
Feb  2 12:22:04 np0005605476 python3[95773]: ansible-ansible.legacy.async_status Invoked with jid=j150128337109.95411 mode=status _async_dir=/root/.ansible_async
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x604
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x605
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x606
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x607
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x608
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.cache creating system inode with ino:0x609
Feb  2 12:22:04 np0005605476 ceph-mds[95614]: mds.0.3 creating_done
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vvdoei is now active in filesystem cephfs as rank 0
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2536784476' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2536784476' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: daemon mds.cephfs.compute-0.vvdoei assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: Cluster is now healthy
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: daemon mds.cephfs.compute-0.vvdoei is now active in filesystem cephfs as rank 0
Feb  2 12:22:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:04 np0005605476 podman[95874]: 2026-02-02 17:22:04.153378185 +0000 UTC m=+0.049395754 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:04 np0005605476 podman[95874]: 2026-02-02 17:22:04.24145955 +0000 UTC m=+0.137477079 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:04 np0005605476 python3[95888]: ansible-ansible.legacy.async_status Invoked with jid=j150128337109.95411 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 12:22:04 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb  2 12:22:04 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb  2 12:22:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v67: 70 pgs: 70 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 895 B/s wr, 1 op/s
Feb  2 12:22:04 np0005605476 python3[96600]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:04 np0005605476 podman[96648]: 2026-02-02 17:22:04.816354496 +0000 UTC m=+0.046223715 container create 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:04 np0005605476 systemd[1]: Started libpod-conmon-23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640.scope.
Feb  2 12:22:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e432334d1c6de91414ab0462d5a62d65d2b237013621acf089ab1949f14613b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e432334d1c6de91414ab0462d5a62d65d2b237013621acf089ab1949f14613b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:04 np0005605476 podman[96648]: 2026-02-02 17:22:04.795732545 +0000 UTC m=+0.025601814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:04 np0005605476 podman[96648]: 2026-02-02 17:22:04.909524465 +0000 UTC m=+0.139393774 container init 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:22:04 np0005605476 podman[96648]: 2026-02-02 17:22:04.915047271 +0000 UTC m=+0.144916490 container start 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:22:04 np0005605476 podman[96648]: 2026-02-02 17:22:04.918757275 +0000 UTC m=+0.148626534 container attach 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e4 new map
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-02-02T17:22:05:002694+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T17:21:54.771913+0000#012modified#0112026-02-02T17:22:05.002691+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.vvdoei{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2186716036,v1:192.168.122.100:6815/2186716036] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 12:22:05 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei Updating MDS map to version 4 from mon.0
Feb  2 12:22:05 np0005605476 ceph-mds[95614]: mds.0.3 handle_mds_map I am now mds.0.3
Feb  2 12:22:05 np0005605476 ceph-mds[95614]: mds.0.3 handle_mds_map state change up:creating --> up:active
Feb  2 12:22:05 np0005605476 ceph-mds[95614]: mds.0.3 recovery_done -- successful recovery!
Feb  2 12:22:05 np0005605476 ceph-mds[95614]: mds.0.3 active_start
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2186716036,v1:192.168.122.100:6815/2186716036] up:active
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vvdoei=up:active}
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/2536784476' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb  2 12:22:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.239594176 +0000 UTC m=+0.041904113 container create f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:22:05 np0005605476 systemd[1]: Started libpod-conmon-f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce.scope.
Feb  2 12:22:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:05 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.300897025 +0000 UTC m=+0.103207042 container init f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:22:05 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.307218693 +0000 UTC m=+0.109528660 container start f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:22:05 np0005605476 exciting_hamilton[96771]: 167 167
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.310924398 +0000 UTC m=+0.113234425 container attach f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:22:05 np0005605476 systemd[1]: libpod-f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce.scope: Deactivated successfully.
Feb  2 12:22:05 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb  2 12:22:05 np0005605476 conmon[96771]: conmon f832ed757f400dfa2133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce.scope/container/memory.events
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.312678477 +0000 UTC m=+0.114988454 container died f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.225272742 +0000 UTC m=+0.027582699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-66c8a3d7a8790abf4ac88c073d80c2ef4ad34e31718fa58e84b41c018c377d14-merged.mount: Deactivated successfully.
Feb  2 12:22:05 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:22:05 np0005605476 determined_proskuriakova[96666]: 
Feb  2 12:22:05 np0005605476 determined_proskuriakova[96666]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 12:22:05 np0005605476 podman[96756]: 2026-02-02 17:22:05.350144284 +0000 UTC m=+0.152454221 container remove f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:05 np0005605476 systemd[1]: libpod-conmon-f832ed757f400dfa21338eae4a161d48f484cdc4f7ebe175f0df58aee94e49ce.scope: Deactivated successfully.
Feb  2 12:22:05 np0005605476 systemd[1]: libpod-23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640.scope: Deactivated successfully.
Feb  2 12:22:05 np0005605476 podman[96648]: 2026-02-02 17:22:05.36773371 +0000 UTC m=+0.597602949 container died 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6e432334d1c6de91414ab0462d5a62d65d2b237013621acf089ab1949f14613b-merged.mount: Deactivated successfully.
Feb  2 12:22:05 np0005605476 podman[96648]: 2026-02-02 17:22:05.406287228 +0000 UTC m=+0.636156437 container remove 23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640 (image=quay.io/ceph/ceph:v20, name=determined_proskuriakova, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:05 np0005605476 systemd[1]: libpod-conmon-23e59fcc91a64f5ea9fdd70b54bb2aa02ed413765aa97446f290454c3bcff640.scope: Deactivated successfully.
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.47973709 +0000 UTC m=+0.036777628 container create 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:22:05 np0005605476 systemd[1]: Started libpod-conmon-59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31.scope.
Feb  2 12:22:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.462250807 +0000 UTC m=+0.019291365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.562954067 +0000 UTC m=+0.119994585 container init 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.573158815 +0000 UTC m=+0.130199333 container start 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.575971535 +0000 UTC m=+0.133012083 container attach 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:22:05 np0005605476 hungry_wing[96825]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:22:05 np0005605476 hungry_wing[96825]: --> All data devices are unavailable
Feb  2 12:22:05 np0005605476 systemd[1]: libpod-59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31.scope: Deactivated successfully.
Feb  2 12:22:05 np0005605476 podman[96808]: 2026-02-02 17:22:05.999373468 +0000 UTC m=+0.556414246 container died 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4bae0e9b1dcaa817239397015193d49db432dd03ec8f70477a8c6aef97d3ea2f-merged.mount: Deactivated successfully.
Feb  2 12:22:06 np0005605476 podman[96808]: 2026-02-02 17:22:06.048694759 +0000 UTC m=+0.605735277 container remove 59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-conmon-59fdc16f85944d2cae99a70132d7b64177aea78b33d921837bb53a5e92a00a31.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 python3[96870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.166566024 +0000 UTC m=+0.033434744 container create 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb  2 12:22:06 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:06 np0005605476 systemd[1]: Started libpod-conmon-0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28.scope.
Feb  2 12:22:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bc2ce42b68326d8db25f5af6da7d25ecfd1fc143e435a247785ad1b0db90e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bc2ce42b68326d8db25f5af6da7d25ecfd1fc143e435a247785ad1b0db90e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.246663054 +0000 UTC m=+0.113531784 container init 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.152995992 +0000 UTC m=+0.019864732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.251789718 +0000 UTC m=+0.118658438 container start 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.255774021 +0000 UTC m=+0.122642741 container attach 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:22:06 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb  2 12:22:06 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.468699987 +0000 UTC m=+0.035711668 container create a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:06 np0005605476 systemd[1]: Started libpod-conmon-a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377.scope.
Feb  2 12:22:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.517654008 +0000 UTC m=+0.084665709 container init a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.521499177 +0000 UTC m=+0.088510848 container start a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.525599812 +0000 UTC m=+0.092611503 container attach a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:06 np0005605476 trusting_varahamihira[97004]: 167 167
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 conmon[97004]: conmon a7e13e13694021c3af78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377.scope/container/memory.events
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.52692954 +0000 UTC m=+0.093941221 container died a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:22:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2d2830be1f1cbccf2577c9ecdfdd48a557085457b5e6c365940c24b21ebc670f-merged.mount: Deactivated successfully.
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.45426128 +0000 UTC m=+0.021272991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:06 np0005605476 podman[96987]: 2026-02-02 17:22:06.556424422 +0000 UTC m=+0.123436103 container remove a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_varahamihira, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-conmon-a7e13e13694021c3af787934134baab7f6f7a1ef4133f2bedd1b03b9f49fe377.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} v 0)
Feb  2 12:22:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} : dispatch
Feb  2 12:22:06 np0005605476 stupefied_swanson[96950]: 
Feb  2 12:22:06 np0005605476 stupefied_swanson[96950]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.660640922 +0000 UTC m=+0.037145199 container create 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v70: 71 pgs: 1 unknown, 70 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.676352085 +0000 UTC m=+0.543220805 container died 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:06 np0005605476 systemd[1]: Started libpod-conmon-38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714.scope.
Feb  2 12:22:06 np0005605476 podman[96904]: 2026-02-02 17:22:06.712961108 +0000 UTC m=+0.579829828 container remove 0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28 (image=quay.io/ceph/ceph:v20, name=stupefied_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dee9a4f5ba31d366aa1f5e3a0b6fcce3d251a2d1b54b876062c057216dcca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dee9a4f5ba31d366aa1f5e3a0b6fcce3d251a2d1b54b876062c057216dcca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dee9a4f5ba31d366aa1f5e3a0b6fcce3d251a2d1b54b876062c057216dcca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dee9a4f5ba31d366aa1f5e3a0b6fcce3d251a2d1b54b876062c057216dcca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-conmon-0c4e4f964aa72488e262d56fa9dad447c07579dbfd094bfe442be0ab694a0a28.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.736030178 +0000 UTC m=+0.112534475 container init 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.642937962 +0000 UTC m=+0.019442289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.740360741 +0000 UTC m=+0.116865018 container start 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.742980495 +0000 UTC m=+0.119484792 container attach 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]: {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    "0": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "devices": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "/dev/loop3"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            ],
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_name": "ceph_lv0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_size": "21470642176",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "name": "ceph_lv0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "tags": {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.crush_device_class": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.encrypted": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_id": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.vdo": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.with_tpm": "0"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            },
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "vg_name": "ceph_vg0"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        }
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    ],
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    "1": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "devices": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "/dev/loop4"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            ],
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_name": "ceph_lv1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_size": "21470642176",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "name": "ceph_lv1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "tags": {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.crush_device_class": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.encrypted": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_id": "1",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.vdo": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.with_tpm": "0"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            },
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "vg_name": "ceph_vg1"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        }
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    ],
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    "2": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "devices": [
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "/dev/loop5"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            ],
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_name": "ceph_lv2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_size": "21470642176",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "name": "ceph_lv2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "tags": {
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.crush_device_class": "",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.encrypted": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osd_id": "2",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.vdo": "0",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:                "ceph.with_tpm": "0"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            },
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "type": "block",
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:            "vg_name": "ceph_vg2"
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:        }
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]:    ]
Feb  2 12:22:06 np0005605476 frosty_torvalds[97054]: }
Feb  2 12:22:06 np0005605476 systemd[1]: libpod-38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714.scope: Deactivated successfully.
Feb  2 12:22:06 np0005605476 podman[97029]: 2026-02-02 17:22:06.975298128 +0000 UTC m=+0.351802415 container died 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-91bc2ce42b68326d8db25f5af6da7d25ecfd1fc143e435a247785ad1b0db90e0-merged.mount: Deactivated successfully.
Feb  2 12:22:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c0dee9a4f5ba31d366aa1f5e3a0b6fcce3d251a2d1b54b876062c057216dcca3-merged.mount: Deactivated successfully.
Feb  2 12:22:07 np0005605476 podman[97029]: 2026-02-02 17:22:07.093438951 +0000 UTC m=+0.469943268 container remove 38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:22:07 np0005605476 systemd[1]: libpod-conmon-38d23676dee0d64334e4812facd130004b1ddfe007558197e628b9bedeb67714.scope: Deactivated successfully.
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb  2 12:22:07 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.457326586 +0000 UTC m=+0.031319735 container create 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:22:07 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb  2 12:22:07 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:07 np0005605476 systemd[1]: Started libpod-conmon-3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8.scope.
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:07 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 7 completed events
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:22:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.523278166 +0000 UTC m=+0.097271325 container init 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.527689817 +0000 UTC m=+0.101682976 container start 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:22:07 np0005605476 sweet_antonelli[97183]: 167 167
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.530684159 +0000 UTC m=+0.104677338 container attach 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:07 np0005605476 systemd[1]: libpod-3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8.scope: Deactivated successfully.
Feb  2 12:22:07 np0005605476 conmon[97183]: conmon 3d951245ff91e8829a51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8.scope/container/memory.events
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.531891773 +0000 UTC m=+0.105884932 container died 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.443553987 +0000 UTC m=+0.017547146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c6835ea0aac4c101c2f87edbac1aa25afdf4646079a6668345cdf870e46d5843-merged.mount: Deactivated successfully.
Feb  2 12:22:07 np0005605476 podman[97165]: 2026-02-02 17:22:07.559658105 +0000 UTC m=+0.133651254 container remove 3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 12:22:07 np0005605476 python3[97173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:07 np0005605476 systemd[1]: libpod-conmon-3d951245ff91e8829a51cda67eaffb490acc8c3276e24c9de1ee3504d76dddf8.scope: Deactivated successfully.
Feb  2 12:22:07 np0005605476 podman[97200]: 2026-02-02 17:22:07.600968339 +0000 UTC m=+0.029201603 container create 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:07 np0005605476 systemd[1]: Started libpod-conmon-9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e.scope.
Feb  2 12:22:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adbeffbbd7a6bed723cab8c268d52d6f08fc7e01144ae8f18c24a79842db9c67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adbeffbbd7a6bed723cab8c268d52d6f08fc7e01144ae8f18c24a79842db9c67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 podman[97200]: 2026-02-02 17:22:07.653537552 +0000 UTC m=+0.081770836 container init 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:22:07 np0005605476 podman[97200]: 2026-02-02 17:22:07.657303936 +0000 UTC m=+0.085537200 container start 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:22:07 np0005605476 podman[97200]: 2026-02-02 17:22:07.660120893 +0000 UTC m=+0.088354187 container attach 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:07 np0005605476 podman[97224]: 2026-02-02 17:22:07.671753782 +0000 UTC m=+0.033113360 container create e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:07 np0005605476 podman[97200]: 2026-02-02 17:22:07.588829966 +0000 UTC m=+0.017063250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:07 np0005605476 systemd[1]: Started libpod-conmon-e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9.scope.
Feb  2 12:22:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af36b95918f33823e5ab4e7cbe7d11e0924191c806fe2458bdcd224bdd0b6af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af36b95918f33823e5ab4e7cbe7d11e0924191c806fe2458bdcd224bdd0b6af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af36b95918f33823e5ab4e7cbe7d11e0924191c806fe2458bdcd224bdd0b6af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af36b95918f33823e5ab4e7cbe7d11e0924191c806fe2458bdcd224bdd0b6af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:07 np0005605476 podman[97224]: 2026-02-02 17:22:07.740686815 +0000 UTC m=+0.102046423 container init e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:22:07 np0005605476 podman[97224]: 2026-02-02 17:22:07.744713656 +0000 UTC m=+0.106073264 container start e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:22:07 np0005605476 podman[97224]: 2026-02-02 17:22:07.748123039 +0000 UTC m=+0.109482677 container attach e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:07 np0005605476 podman[97224]: 2026-02-02 17:22:07.65928885 +0000 UTC m=+0.020648468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:07 np0005605476 ansible-async_wrapper.py[95433]: Done in kid B.
Feb  2 12:22:07 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb  2 12:22:07 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb  2 12:22:08 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 12:22:08 np0005605476 exciting_jennings[97222]: 
Feb  2 12:22:08 np0005605476 exciting_jennings[97222]: [{"container_id": "43c95f496257", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2026-02-02T17:20:55.796554Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-02T17:20:55.854466Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.818244Z", "memory_usage": 7782531, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-02T17:20:55.729237Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@crash.compute-0", "version": "20.2.0"}, {"container_id": "30ea8cb4e62f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.81%", "created": "2026-02-02T17:22:03.507056Z", "daemon_id": "cephfs.compute-0.vvdoei", "daemon_name": "mds.cephfs.compute-0.vvdoei", "daemon_type": "mds", "events": ["2026-02-02T17:22:03.567519Z daemon:mds.cephfs.compute-0.vvdoei [INFO] \"Deployed mds.cephfs.compute-0.vvdoei on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-02-02T17:22:04.819083Z", "memory_usage": 17752391, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-02T17:22:03.415398Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mds.cephfs.compute-0.vvdoei", "version": "20.2.0"}, {"container_id": "f51dea2484a8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "20.00%", "created": "2026-02-02T17:20:21.278580Z", "daemon_id": "compute-0.hccdnu", "daemon_name": "mgr.compute-0.hccdnu", "daemon_type": "mgr", "events": ["2026-02-02T17:20:59.707195Z daemon:mgr.compute-0.hccdnu [INFO] \"Reconfigured mgr.compute-0.hccdnu on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.818086Z", "memory_usage": 545259520, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-02T17:20:20.681624Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mgr.compute-0.hccdnu", "version": "20.2.0"}, {"container_id": "49cf60189998", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.78%", "created": "2026-02-02T17:20:16.829454Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-02T17:20:59.180568Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.817867Z", "memory_request": 2147483648, "memory_usage": 40485519, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-02T17:20:18.793886Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@mon.compute-0", "version": "20.2.0"}, {"container_id": "5ec5d30977a0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.64%", "created": "2026-02-02T17:21:15.376963Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-02T17:21:15.428650Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.818394Z", "memory_request": 4294967296, "memory_usage": 61069066, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T17:21:15.305721Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@osd.0", "version": "20.2.0"}, {"container_id": "849770b4bec6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.98%", "created": "2026-02-02T17:21:19.123027Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-02T17:21:19.190366Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.818541Z", "memory_request": 4294967296, "memory_usage": 61949870, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T17:21:18.997711Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@osd.1", "version": "20.2.0"}, {"container_id": "49ee9de1004a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.93%", "created": "2026-02-02T17:21:22.736089Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-02T17:21:22.833122Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T17:22:04.818687Z", "memory_request": 4294967296, "memory_usage": 62956503, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T17:21:22.625291Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb48d0ef-3496-563c-b73d-661fb962013e@osd.2", "version": "20.2.0"}, {"container_id": "31c23500b425", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Feb  2 12:22:08 np0005605476 systemd[1]: libpod-9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e.scope: Deactivated successfully.
Feb  2 12:22:08 np0005605476 podman[97200]: 2026-02-02 17:22:08.076389402 +0000 UTC m=+0.504622676 container died 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:08 np0005605476 systemd[1]: var-lib-containers-storage-overlay-adbeffbbd7a6bed723cab8c268d52d6f08fc7e01144ae8f18c24a79842db9c67-merged.mount: Deactivated successfully.
Feb  2 12:22:08 np0005605476 podman[97200]: 2026-02-02 17:22:08.112143623 +0000 UTC m=+0.540376887 container remove 9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e (image=quay.io/ceph/ceph:v20, name=exciting_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:22:08 np0005605476 systemd[1]: libpod-conmon-9ff5d740272093fd886baa9453757fdb9e54a686c6e4558a444b19254834898e.scope: Deactivated successfully.
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb  2 12:22:08 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:08 np0005605476 lvm[97351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:22:08 np0005605476 lvm[97351]: VG ceph_vg0 finished
Feb  2 12:22:08 np0005605476 lvm[97354]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:22:08 np0005605476 lvm[97354]: VG ceph_vg1 finished
Feb  2 12:22:08 np0005605476 rsyslogd[1006]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "43c95f496257", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb  2 12:22:08 np0005605476 lvm[97356]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:22:08 np0005605476 lvm[97356]: VG ceph_vg2 finished
Feb  2 12:22:08 np0005605476 objective_wilson[97242]: {}
Feb  2 12:22:08 np0005605476 podman[97224]: 2026-02-02 17:22:08.434874433 +0000 UTC m=+0.796234061 container died e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:22:08 np0005605476 systemd[1]: libpod-e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9.scope: Deactivated successfully.
Feb  2 12:22:08 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8af36b95918f33823e5ab4e7cbe7d11e0924191c806fe2458bdcd224bdd0b6af-merged.mount: Deactivated successfully.
Feb  2 12:22:08 np0005605476 podman[97224]: 2026-02-02 17:22:08.476145807 +0000 UTC m=+0.837505395 container remove e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:08 np0005605476 systemd[1]: libpod-conmon-e23c4fde5062e17b5c3f6b3d626b2b9cb4cae308289f7abd885cba80e71be7e9.scope: Deactivated successfully.
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v73: 72 pgs: 2 unknown, 70 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 11 op/s
Feb  2 12:22:08 np0005605476 python3[97472]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:09 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mds-cephfs-compute-0-vvdoei[95610]: 2026-02-02T17:22:09.014+0000 7f29e353d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  2 12:22:09 np0005605476 ceph-mds[95614]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.023543015 +0000 UTC m=+0.035650810 container create 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:09 np0005605476 systemd[1]: Started libpod-conmon-5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef.scope.
Feb  2 12:22:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f20fdd13c70a3814cadc080ed1fa13e22737a79dde7d881567bd79b46350ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f20fdd13c70a3814cadc080ed1fa13e22737a79dde7d881567bd79b46350ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.103257634 +0000 UTC m=+0.115365449 container init 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.006872457 +0000 UTC m=+0.018980282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.110145603 +0000 UTC m=+0.122253398 container start 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.112846617 +0000 UTC m=+0.124954402 container attach 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:09 np0005605476 podman[97534]: 2026-02-02 17:22:09.162210862 +0000 UTC m=+0.051677420 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb  2 12:22:09 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  2 12:22:09 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:09 np0005605476 podman[97534]: 2026-02-02 17:22:09.25646436 +0000 UTC m=+0.145930938 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:09 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb  2 12:22:09 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb  2 12:22:09 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb  2 12:22:09 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164674017' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 12:22:09 np0005605476 confident_spence[97519]: 
Feb  2 12:22:09 np0005605476 confident_spence[97519]: {"fsid":"eb48d0ef-3496-563c-b73d-661fb962013e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":3,"osd_up_since":1770052888,"num_in_osds":3,"osd_in_since":1770052869,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":70},{"state_name":"unknown","count":2}],"num_pgs":72,"num_pools":10,"num_objects":30,"data_bytes":463390,"bytes_used":83881984,"bytes_avail":64328044544,"bytes_total":64411926528,"unknown_pgs_ratio":0.02777777798473835,"write_bytes_sec":3583,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":4,"btime":"2026-02-02T17:22:05:002694+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.vvdoei","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T17:21:38.658205+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb  2 12:22:09 np0005605476 systemd[1]: libpod-5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef.scope: Deactivated successfully.
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.6451022 +0000 UTC m=+0.657210015 container died 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:09 np0005605476 systemd[1]: var-lib-containers-storage-overlay-63f20fdd13c70a3814cadc080ed1fa13e22737a79dde7d881567bd79b46350ec-merged.mount: Deactivated successfully.
Feb  2 12:22:09 np0005605476 podman[97486]: 2026-02-02 17:22:09.678015763 +0000 UTC m=+0.690123558 container remove 5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef (image=quay.io/ceph/ceph:v20, name=confident_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:22:09 np0005605476 systemd[1]: libpod-conmon-5fff7441bc6fdc3335720cc665ab366390c405e0eb93f927a5eb368faccb8fef.scope: Deactivated successfully.
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:22:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb  2 12:22:10 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.22753841 +0000 UTC m=+0.047957167 container create 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:10 np0005605476 systemd[1]: Started libpod-conmon-317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c.scope.
Feb  2 12:22:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.289929523 +0000 UTC m=+0.110348290 container init 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.295567558 +0000 UTC m=+0.115986315 container start 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.202098622 +0000 UTC m=+0.022517469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.299458135 +0000 UTC m=+0.119876892 container attach 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:10 np0005605476 ecstatic_ritchie[97828]: 167 167
Feb  2 12:22:10 np0005605476 systemd[1]: libpod-317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c.scope: Deactivated successfully.
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.301156061 +0000 UTC m=+0.121574848 container died 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:22:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-834000e081f1bc481fabc89e5404686729d30ca5dc7358668254690b039382ca-merged.mount: Deactivated successfully.
Feb  2 12:22:10 np0005605476 podman[97812]: 2026-02-02 17:22:10.329687535 +0000 UTC m=+0.150106292 container remove 317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:10 np0005605476 systemd[1]: libpod-conmon-317d245523c8f444a113049faad50b1333c907da44ef38d28593b9f51c63e24c.scope: Deactivated successfully.
Feb  2 12:22:10 np0005605476 podman[97878]: 2026-02-02 17:22:10.466086099 +0000 UTC m=+0.049612853 container create 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:22:10 np0005605476 python3[97872]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:10 np0005605476 systemd[1]: Started libpod-conmon-348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81.scope.
Feb  2 12:22:10 np0005605476 podman[97878]: 2026-02-02 17:22:10.444591649 +0000 UTC m=+0.028118423 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 podman[97878]: 2026-02-02 17:22:10.570256919 +0000 UTC m=+0.153783713 container init 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:22:10 np0005605476 podman[97895]: 2026-02-02 17:22:10.571927515 +0000 UTC m=+0.043053683 container create 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:22:10 np0005605476 podman[97878]: 2026-02-02 17:22:10.579209185 +0000 UTC m=+0.162735929 container start 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:10 np0005605476 podman[97878]: 2026-02-02 17:22:10.586099284 +0000 UTC m=+0.169626078 container attach 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:22:10 np0005605476 systemd[1]: Started libpod-conmon-9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a.scope.
Feb  2 12:22:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10630cd8a6ca6c41974115a4148f2201c285f4f87cc2f7738ba2bc446d17d7ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10630cd8a6ca6c41974115a4148f2201c285f4f87cc2f7738ba2bc446d17d7ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:10 np0005605476 podman[97895]: 2026-02-02 17:22:10.645547807 +0000 UTC m=+0.116674005 container init 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:10 np0005605476 podman[97895]: 2026-02-02 17:22:10.552104781 +0000 UTC m=+0.023231009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:10 np0005605476 podman[97895]: 2026-02-02 17:22:10.650307137 +0000 UTC m=+0.121433315 container start 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:22:10 np0005605476 podman[97895]: 2026-02-02 17:22:10.653397382 +0000 UTC m=+0.124523580 container attach 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:22:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v76: 73 pgs: 1 unknown, 72 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb  2 12:22:11 np0005605476 flamboyant_leavitt[97896]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:22:11 np0005605476 flamboyant_leavitt[97896]: --> All data devices are unavailable
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 podman[97878]: 2026-02-02 17:22:11.046125954 +0000 UTC m=+0.629652718 container died 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641151849' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 12:22:11 np0005605476 vibrant_euclid[97915]: 
Feb  2 12:22:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d72d8f23756824f75405d8be05c23bae68d706e4d3b52bf345a20b4eefd8c212-merged.mount: Deactivated successfully.
Feb  2 12:22:11 np0005605476 podman[97878]: 2026-02-02 17:22:11.089160975 +0000 UTC m=+0.672687709 container remove 348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_leavitt, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 vibrant_euclid[97915]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.molmny","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb  2 12:22:11 np0005605476 podman[97895]: 2026-02-02 17:22:11.091842358 +0000 UTC m=+0.562968566 container died 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:22:11 np0005605476 podman[97895]: 2026-02-02 17:22:11.126007526 +0000 UTC m=+0.597133704 container remove 9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a (image=quay.io/ceph/ceph:v20, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-conmon-348014fdefc1e0eb3f35de97adcddd0bc0cbea6840d493065411b592b84c0e81.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-conmon-9c6caebde011c625ca67fb6467cebdca7a56a52555b1816e92a11443d6be555a.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb  2 12:22:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-10630cd8a6ca6c41974115a4148f2201c285f4f87cc2f7738ba2bc446d17d7ab-merged.mount: Deactivated successfully.
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  2 12:22:11 np0005605476 ceph-mon[75197]: from='client.? 192.168.122.100:0/3160025790' entity='client.rgw.rgw.compute-0.molmny' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 12:22:11 np0005605476 radosgw[95129]: v1 topic migration: starting v1 topic migration..
Feb  2 12:22:11 np0005605476 radosgw[95129]: v1 topic migration: finished v1 topic migration
Feb  2 12:22:11 np0005605476 radosgw[95129]: framework: beast
Feb  2 12:22:11 np0005605476 radosgw[95129]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb  2 12:22:11 np0005605476 radosgw[95129]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb  2 12:22:11 np0005605476 radosgw[95129]: starting handler: beast
Feb  2 12:22:11 np0005605476 radosgw[95129]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 12:22:11 np0005605476 radosgw[95129]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.molmny,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864288,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=a7995faa-bb92-4914-a7cd-e36c1deac625,zone_name=default,zonegroup_id=ac71a82f-f3fa-4766-a3b5-5614c8c8b06a,zonegroup_name=default}
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.526536733 +0000 UTC m=+0.049024987 container create ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:11 np0005605476 systemd[1]: Started libpod-conmon-ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf.scope.
Feb  2 12:22:11 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.589197793 +0000 UTC m=+0.111686047 container init ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.594502619 +0000 UTC m=+0.116990863 container start ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:22:11 np0005605476 optimistic_agnesi[98090]: 167 167
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.598682633 +0000 UTC m=+0.121170907 container attach ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.599178507 +0000 UTC m=+0.121666751 container died ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.511409787 +0000 UTC m=+0.033898061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a3f306a6470e89d3609a5398e86f866a1788d1c734ef75d930fafb660c175f99-merged.mount: Deactivated successfully.
Feb  2 12:22:11 np0005605476 podman[98074]: 2026-02-02 17:22:11.647698329 +0000 UTC m=+0.170186573 container remove ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:11 np0005605476 systemd[1]: libpod-conmon-ecba7608fcdaa23d482146799c5f0f2c253edfce707a6bc1578a808fe56e1caf.scope: Deactivated successfully.
Feb  2 12:22:11 np0005605476 podman[98115]: 2026-02-02 17:22:11.757672368 +0000 UTC m=+0.035474935 container create 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:11 np0005605476 systemd[1]: Started libpod-conmon-75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0.scope.
Feb  2 12:22:11 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/192918be49f4a4d0a5949043c4769d285f62f205dc27866137441253c7a13f01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/192918be49f4a4d0a5949043c4769d285f62f205dc27866137441253c7a13f01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/192918be49f4a4d0a5949043c4769d285f62f205dc27866137441253c7a13f01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/192918be49f4a4d0a5949043c4769d285f62f205dc27866137441253c7a13f01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:11 np0005605476 podman[98115]: 2026-02-02 17:22:11.742318477 +0000 UTC m=+0.020121054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:11 np0005605476 podman[98115]: 2026-02-02 17:22:11.847721461 +0000 UTC m=+0.125524048 container init 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:11 np0005605476 podman[98115]: 2026-02-02 17:22:11.855390581 +0000 UTC m=+0.133193138 container start 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:22:11 np0005605476 podman[98115]: 2026-02-02 17:22:11.858165477 +0000 UTC m=+0.135968054 container attach 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb  2 12:22:11 np0005605476 python3[98159]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.012441203 +0000 UTC m=+0.041985204 container create 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:22:12 np0005605476 systemd[1]: Started libpod-conmon-377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b.scope.
Feb  2 12:22:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c92f88f10488ac51257a64dee23ff479af3510d7bf77511584d2bb4dea256d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c92f88f10488ac51257a64dee23ff479af3510d7bf77511584d2bb4dea256d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.068576874 +0000 UTC m=+0.098120885 container init 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.073573231 +0000 UTC m=+0.103117232 container start 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.076714138 +0000 UTC m=+0.106258139 container attach 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:11.989888914 +0000 UTC m=+0.019432965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]: {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    "0": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "devices": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "/dev/loop3"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            ],
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_name": "ceph_lv0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_size": "21470642176",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "name": "ceph_lv0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "tags": {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.crush_device_class": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.encrypted": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_id": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.vdo": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.with_tpm": "0"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            },
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "vg_name": "ceph_vg0"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        }
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    ],
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    "1": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "devices": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "/dev/loop4"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            ],
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_name": "ceph_lv1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_size": "21470642176",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "name": "ceph_lv1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "tags": {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.crush_device_class": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.encrypted": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_id": "1",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.vdo": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.with_tpm": "0"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            },
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "vg_name": "ceph_vg1"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        }
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    ],
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    "2": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "devices": [
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "/dev/loop5"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            ],
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_name": "ceph_lv2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_size": "21470642176",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "name": "ceph_lv2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "tags": {
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.cluster_name": "ceph",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.crush_device_class": "",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.encrypted": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.objectstore": "bluestore",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osd_id": "2",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.vdo": "0",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:                "ceph.with_tpm": "0"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            },
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "type": "block",
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:            "vg_name": "ceph_vg2"
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:        }
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]:    ]
Feb  2 12:22:12 np0005605476 xenodochial_tesla[98144]: }
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98115]: 2026-02-02 17:22:12.149000692 +0000 UTC m=+0.426803249 container died 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:22:12 np0005605476 podman[98115]: 2026-02-02 17:22:12.186581224 +0000 UTC m=+0.464383781 container remove 75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-conmon-75619494b30465fd469a83e915676b6316b81fe14a898e8692abba2a19e92be0.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-192918be49f4a4d0a5949043c4769d285f62f205dc27866137441253c7a13f01-merged.mount: Deactivated successfully.
Feb  2 12:22:12 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb  2 12:22:12 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb  2 12:22:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb  2 12:22:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993466423' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb  2 12:22:12 np0005605476 agitated_margulis[98179]: mimic
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.528685786 +0000 UTC m=+0.558229807 container died 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6c92f88f10488ac51257a64dee23ff479af3510d7bf77511584d2bb4dea256d1-merged.mount: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98162]: 2026-02-02 17:22:12.560177231 +0000 UTC m=+0.589721232 container remove 377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b (image=quay.io/ceph/ceph:v20, name=agitated_margulis, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-conmon-377c618ae70dedf19bb17fbfe0cbacb4e00f210aa46562d9abb3daa2d5be4b7b.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.630753478 +0000 UTC m=+0.036391220 container create 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:22:12 np0005605476 systemd[1]: Started libpod-conmon-1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5.scope.
Feb  2 12:22:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v78: 73 pgs: 1 unknown, 72 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 228 B/s rd, 457 B/s wr, 1 op/s
Feb  2 12:22:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.701571163 +0000 UTC m=+0.107208925 container init 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.706498308 +0000 UTC m=+0.112136050 container start 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.613952187 +0000 UTC m=+0.019589959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.709975983 +0000 UTC m=+0.115613735 container attach 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:22:12 np0005605476 gracious_allen[98309]: 167 167
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.711646309 +0000 UTC m=+0.117284051 container died 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:22:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6f4998941beac7fd8a3c078d3f1f3a6383f34837a8f13048cdfa8432d1149270-merged.mount: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98292]: 2026-02-02 17:22:12.741141969 +0000 UTC m=+0.146779711 container remove 1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_allen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:12 np0005605476 systemd[1]: libpod-conmon-1c5d76a2e738502f5e0d5cc3b36fbe2e3d246f9e0e55eb1a4ec098eb6729b6c5.scope: Deactivated successfully.
Feb  2 12:22:12 np0005605476 podman[98334]: 2026-02-02 17:22:12.861677708 +0000 UTC m=+0.037407718 container create 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:12 np0005605476 systemd[1]: Started libpod-conmon-90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3.scope.
Feb  2 12:22:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bbf1d0cc593a838b0c2ae5109ed0bcb1780cc51bba19da73097df69dd7b768/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bbf1d0cc593a838b0c2ae5109ed0bcb1780cc51bba19da73097df69dd7b768/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bbf1d0cc593a838b0c2ae5109ed0bcb1780cc51bba19da73097df69dd7b768/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bbf1d0cc593a838b0c2ae5109ed0bcb1780cc51bba19da73097df69dd7b768/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:12 np0005605476 podman[98334]: 2026-02-02 17:22:12.931874095 +0000 UTC m=+0.107604105 container init 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:22:12 np0005605476 podman[98334]: 2026-02-02 17:22:12.939652969 +0000 UTC m=+0.115382979 container start 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:12 np0005605476 podman[98334]: 2026-02-02 17:22:12.846196873 +0000 UTC m=+0.021926943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:22:12 np0005605476 podman[98334]: 2026-02-02 17:22:12.943236317 +0000 UTC m=+0.118966327 container attach 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:22:13 np0005605476 python3[98398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb  2 12:22:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb  2 12:22:13 np0005605476 podman[98441]: 2026-02-02 17:22:13.437148758 +0000 UTC m=+0.036054531 container create 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:13 np0005605476 systemd[1]: Started libpod-conmon-7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617.scope.
Feb  2 12:22:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb37cd85ae2201ed2cb5e96a479163e1164ff56d3067912760dda8bfbae0bab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb37cd85ae2201ed2cb5e96a479163e1164ff56d3067912760dda8bfbae0bab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:13 np0005605476 podman[98441]: 2026-02-02 17:22:13.51700864 +0000 UTC m=+0.115914413 container init 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:13 np0005605476 podman[98441]: 2026-02-02 17:22:13.421263191 +0000 UTC m=+0.020168994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:13 np0005605476 podman[98441]: 2026-02-02 17:22:13.521834113 +0000 UTC m=+0.120739886 container start 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:13 np0005605476 podman[98441]: 2026-02-02 17:22:13.525074842 +0000 UTC m=+0.123980635 container attach 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:13 np0005605476 lvm[98473]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:22:13 np0005605476 lvm[98473]: VG ceph_vg0 finished
Feb  2 12:22:13 np0005605476 lvm[98475]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:22:13 np0005605476 lvm[98475]: VG ceph_vg1 finished
Feb  2 12:22:13 np0005605476 lvm[98477]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:22:13 np0005605476 lvm[98477]: VG ceph_vg2 finished
Feb  2 12:22:13 np0005605476 lvm[98478]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:22:13 np0005605476 lvm[98478]: VG ceph_vg0 finished
Feb  2 12:22:13 np0005605476 unruffled_robinson[98351]: {}
Feb  2 12:22:13 np0005605476 systemd[1]: libpod-90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3.scope: Deactivated successfully.
Feb  2 12:22:13 np0005605476 podman[98334]: 2026-02-02 17:22:13.691732167 +0000 UTC m=+0.867462177 container died 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:13 np0005605476 systemd[1]: libpod-90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3.scope: Consumed 1.000s CPU time.
Feb  2 12:22:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f3bbf1d0cc593a838b0c2ae5109ed0bcb1780cc51bba19da73097df69dd7b768-merged.mount: Deactivated successfully.
Feb  2 12:22:13 np0005605476 podman[98334]: 2026-02-02 17:22:13.726988865 +0000 UTC m=+0.902718875 container remove 90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_robinson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:22:13 np0005605476 systemd[1]: libpod-conmon-90b024959b0e3c905e904de713e39b0162b34df08087280e4406587d5f784ca3.scope: Deactivated successfully.
Feb  2 12:22:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:22:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:22:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb  2 12:22:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207711358' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb  2 12:22:14 np0005605476 objective_almeida[98466]: 
Feb  2 12:22:14 np0005605476 objective_almeida[98466]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb  2 12:22:14 np0005605476 systemd[1]: libpod-7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617.scope: Deactivated successfully.
Feb  2 12:22:14 np0005605476 podman[98441]: 2026-02-02 17:22:14.054685962 +0000 UTC m=+0.653591755 container died 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-cbb37cd85ae2201ed2cb5e96a479163e1164ff56d3067912760dda8bfbae0bab-merged.mount: Deactivated successfully.
Feb  2 12:22:14 np0005605476 podman[98441]: 2026-02-02 17:22:14.088080049 +0000 UTC m=+0.686985822 container remove 7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617 (image=quay.io/ceph/ceph:v20, name=objective_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:14 np0005605476 systemd[1]: libpod-conmon-7733f842ba8db5a3aec12e05cf8c96cc6baeac224667709419d7343949c9f617.scope: Deactivated successfully.
Feb  2 12:22:14 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb  2 12:22:14 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb  2 12:22:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v79: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 237 op/s
Feb  2 12:22:14 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:14 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:15 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb  2 12:22:15 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb  2 12:22:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v80: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 8.8 KiB/s wr, 191 op/s
Feb  2 12:22:18 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb  2 12:22:18 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb  2 12:22:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v81: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 7.6 KiB/s wr, 167 op/s
Feb  2 12:22:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb  2 12:22:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb  2 12:22:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:19 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb  2 12:22:19 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb  2 12:22:20 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb  2 12:22:20 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb  2 12:22:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v82: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Feb  2 12:22:21 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb  2 12:22:21 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb  2 12:22:22 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb  2 12:22:22 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb  2 12:22:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v83: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.6 KiB/s wr, 123 op/s
Feb  2 12:22:23 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb  2 12:22:23 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb  2 12:22:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v84: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb  2 12:22:25 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb  2 12:22:25 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb  2 12:22:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb  2 12:22:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb  2 12:22:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v85: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:27 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb  2 12:22:27 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb  2 12:22:28 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb  2 12:22:28 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb  2 12:22:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb  2 12:22:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb  2 12:22:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v86: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:29 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb  2 12:22:29 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb  2 12:22:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v87: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb  2 12:22:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb  2 12:22:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v88: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:34 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb  2 12:22:34 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb  2 12:22:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v89: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb  2 12:22:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb  2 12:22:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb  2 12:22:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb  2 12:22:35 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb  2 12:22:35 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:22:36
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:22:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v90: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:22:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:22:37 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb  2 12:22:37 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb  2 12:22:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v91: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb  2 12:22:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.163084514657912e-07 of space, bias 4.0, pg target 0.0009795701417589496 quantized to 16 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb  2 12:22:39 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 500632e4-0c3d-4938-b238-dc4e60c60f67 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v93: 73 pgs: 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:40 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.275430679s) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 95.461433411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb  2 12:22:40 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.275430679s) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 95.461433411s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:40 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev fb45bbce-fe03-44d6-9f49-a92239702524 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb  2 12:22:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 6d42314e-375d-4644-82cd-f1414005a90a (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=19/20 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=41/42 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:41 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=19/19 les/c/f=20/20/0 sis=41) [0] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:42 np0005605476 ceph-mgr[75493]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Feb  2 12:22:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v96: 104 pgs: 31 unknown, 73 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:42 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb  2 12:22:42 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  2 12:22:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb  2 12:22:43 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43 pruub=9.247305870s) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active pruub 88.983024597s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:43 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev d0e6997e-1987-4912-b08e-77e53b018a47 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:43 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43 pruub=9.247305870s) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown pruub 88.983024597s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb  2 12:22:44 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 3ace8edd-451c-4e44-ba14-ecc58ef28a53 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 43 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=9.240345955s) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 31'38 mlcod 31'38 active pruub 97.471855164s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.0( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=9.240345955s) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 31'38 mlcod 0'0 unknown pruub 97.471855164s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=20/21 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 44 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=0 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v99: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 45 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=31/32 n=6 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=15.106373787s) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 32'5 mlcod 32'5 active pruub 100.551856995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=9.238862991s) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active pruub 94.684394836s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb  2 12:22:45 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 36f4009f-8f67-424b-82cd-594711604bdd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=9.238862991s) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown pruub 94.684394836s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 45 pg[8.0( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=45 pruub=15.106373787s) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 32'5 mlcod 0'0 unknown pruub 100.551856995s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 45 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb  2 12:22:45 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb  2 12:22:45 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.18( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.7( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.3( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.17( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.8( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.5( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev d40520fe-5c52-436d-8359-53ac56c2e6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.19( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.16( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.13( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.10( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=22/23 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.7( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 32'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.3( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.a( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.17( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.8( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.5( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1e( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.19( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.16( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.13( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=31/31 les/c/f=32/32/0 sis=45) [1] r=0 lpr=45 pi=[31,45)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [1] r=0 lpr=45 pi=[22,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v102: 212 pgs: 1 peering, 93 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] update: starting ev 72e3bf65-2796-4084-a0a8-e73427556ef1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 47 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=33/34 n=210 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=47 pruub=15.113067627s) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 102.589057922s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 500632e4-0c3d-4938-b238-dc4e60c60f67 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 500632e4-0c3d-4938-b238-dc4e60c60f67 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev fb45bbce-fe03-44d6-9f49-a92239702524 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event fb45bbce-fe03-44d6-9f49-a92239702524 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 6d42314e-375d-4644-82cd-f1414005a90a (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 6d42314e-375d-4644-82cd-f1414005a90a (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev d0e6997e-1987-4912-b08e-77e53b018a47 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event d0e6997e-1987-4912-b08e-77e53b018a47 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 3ace8edd-451c-4e44-ba14-ecc58ef28a53 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 3ace8edd-451c-4e44-ba14-ecc58ef28a53 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 36f4009f-8f67-424b-82cd-594711604bdd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 36f4009f-8f67-424b-82cd-594711604bdd (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev d40520fe-5c52-436d-8359-53ac56c2e6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event d40520fe-5c52-436d-8359-53ac56c2e6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] complete: finished ev 72e3bf65-2796-4084-a0a8-e73427556ef1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 72e3bf65-2796-4084-a0a8-e73427556ef1 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 47 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=47 pruub=15.113067627s) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 102.589057922s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27cb2d00 space 0x555b27c3b440 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27cfbd80 space 0x555b28d82240 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdbc80 space 0x555b28deba40 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27cfa600 space 0x555b28143d40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd0100 space 0x555b27c3fd40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1b00 space 0x555b28e4fa40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcb300 space 0x555b28053740 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28159a00 space 0x555b28d8e540 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27d30880 space 0x555b28015140 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28159e00 space 0x555b27d95a40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bca700 space 0x555b28de4840 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcb500 space 0x555b28052e40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27cb3f00 space 0x555b28ddd740 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcaf00 space 0x555b28d8cb40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bca900 space 0x555b28de5140 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1900 space 0x555b28d85440 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdb000 space 0x555b28d83740 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1700 space 0x555b28df6b40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcbc80 space 0x555b28d8f740 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcad80 space 0x555b27d93440 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28158600 space 0x555b28e02840 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bca180 space 0x555b28d8ee40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27baee80 space 0x555b280fbd40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1780 space 0x555b28d92540 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b273bb100 space 0x555b28049a40 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bca500 space 0x555b27d93d40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdaf80 space 0x555b28d82e40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdbc00 space 0x555b28e03d40 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd0f80 space 0x555b28df7440 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1a00 space 0x555b28e5eb40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28159580 space 0x555b28143140 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28159100 space 0x555b28d8dd40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b2724f380 space 0x555b26fef440 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27ad3c80 space 0x555b28d85d40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27cb2900 space 0x555b27d94840 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28158400 space 0x555b28dea840 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27c5f800 space 0x555b27d95140 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdba00 space 0x555b28df7d40 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcaf80 space 0x555b27c3eb40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28158b00 space 0x555b28e4ee40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27d30c80 space 0x555b28015a40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28159500 space 0x555b28d8d440 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27d23b00 space 0x555b27c3a540 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcb780 space 0x555b280fb440 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27c6bf00 space 0x555b26ffd440 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27d23180 space 0x555b26feeb40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1a80 space 0x555b28048840 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27c6bb80 space 0x555b280fab40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bdaf00 space 0x555b26fee240 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28158800 space 0x555b28e03140 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd0f00 space 0x555b26fefa40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcba80 space 0x555b28df6240 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27c6b000 space 0x555b28e5fa40 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27d3aa00 space 0x555b27c3e540 0x0~9a clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bb5900 space 0x555b28014840 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b28158780 space 0x555b28d92e40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bb4e00 space 0x555b27280540 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bd1480 space 0x555b28049140 0x0~98 clean)
Feb  2 12:22:47 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x555b274e9d40) split_cache   moving buffer(0x555b27bcab00 space 0x555b28de5a40 0x0~6e clean)
Feb  2 12:22:47 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb  2 12:22:47 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb  2 12:22:47 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 15 completed events
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:22:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 47 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=35/36 n=9 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=47 pruub=8.511964798s) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 92.934394836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:47 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 47 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=47 pruub=8.511964798s) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 92.934394836s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 48 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=33/33 les/c/f=34/34/0 sis=47) [1] r=0 lpr=47 pi=[33,47)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 48 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=35/35 les/c/f=36/36/0 sis=47) [2] r=0 lpr=47 pi=[35,47)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb  2 12:22:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb  2 12:22:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v105: 274 pgs: 1 peering, 155 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 12:22:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb  2 12:22:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb  2 12:22:49 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=49 pruub=9.022392273s) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active pruub 98.633140564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:49 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=49 pruub=9.022392273s) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown pruub 98.633140564s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:49 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb  2 12:22:49 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb  2 12:22:49 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb  2 12:22:49 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb  2 12:22:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb  2 12:22:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb  2 12:22:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=37/38 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=37/37 les/c/f=38/38/0 sis=49) [1] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb  2 12:22:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb  2 12:22:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:53 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb  2 12:22:53 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:54 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb  2 12:22:54 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb  2 12:22:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:22:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 python3[98576]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:55 np0005605476 podman[98577]: 2026-02-02 17:22:55.103366335 +0000 UTC m=+0.052627726 container create 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:55 np0005605476 systemd[76566]: Starting Mark boot as successful...
Feb  2 12:22:55 np0005605476 systemd[76566]: Finished Mark boot as successful.
Feb  2 12:22:55 np0005605476 systemd[1]: Started libpod-conmon-0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7.scope.
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:22:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb  2 12:22:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b88adc4382029e1942f455cff3107a7e42c3c568a3520dfa5fedaf49041682/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b88adc4382029e1942f455cff3107a7e42c3c568a3520dfa5fedaf49041682/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.893836021s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.250900269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849604607s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.206710815s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849595070s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.206748962s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849574089s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.206710815s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849556923s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.206748962s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.893672943s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.250900269s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.848975182s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.206535339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.848914146s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.206535339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849090576s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.206703186s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.848848343s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.206535339s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.899322510s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.257057190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.848893166s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.206535339s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.898212433s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256088257s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.898195267s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256088257s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.898092270s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256057739s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.898075104s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256057739s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847372055s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205406189s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847360611s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205406189s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847274780s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205383301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847265244s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205383301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847140312s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205383301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897998810s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256248474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.847114563s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205383301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897967339s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256248474s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846983910s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205352783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.899301529s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.257057190s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846962929s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205352783s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846713066s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205261230s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846699715s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205261230s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846652985s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205268860s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846633911s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205268860s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846785545s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205444336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846611023s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205276489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846770287s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205444336s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846595764s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205276489s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897915840s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256675720s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897904396s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256675720s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897861481s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256683350s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846359253s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205207825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846391678s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205245972s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897839546s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256683350s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846380234s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205245972s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846340179s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205207825s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.849060059s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.206703186s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846124649s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205078125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846115112s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205078125s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897768021s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 active pruub 113.256797791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=13.897756577s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 113.256797791s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.846017838s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205093384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.845996857s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205093384s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.844780922s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.203903198s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.845914841s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205039978s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.845893860s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205039978s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.844762802s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.203903198s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.845827103s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 active pruub 110.205131531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=10.845788002s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 unknown NOTIFY pruub 110.205131531s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.12( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924612045s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.811691284s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.872395515s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759536743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.872378349s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759536743s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924438477s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.811660767s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924400330s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.811660767s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.872075081s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759529114s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871872902s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759475708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871849060s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759475708s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924077034s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.811759949s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924060822s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.811759949s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871736526s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759529114s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871435165s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759452820s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871415138s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759452820s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871231079s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759468079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.928195953s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816543579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.12( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.924552917s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.811691284s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.928180695s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816543579s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.871076584s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759468079s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927987099s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816406250s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927968025s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816406250s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870796204s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759277344s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870783806s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759277344s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927803040s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816398621s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.9( v 49'19 (0'0,49'19] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927906036s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.816520691s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927775383s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816398621s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.9( v 49'19 (0'0,49'19] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927881241s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.816520691s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870247841s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759262085s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870226860s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759262085s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870106697s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759178162s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.e( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927471161s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.816551208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.870071411s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759178162s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.e( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927422523s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.816551208s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927223206s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816490173s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927021980s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816490173s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.14( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927080154s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.816589355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927034378s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816574097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927016258s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816574097s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.14( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927021980s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.816589355s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927049637s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816680908s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927034378s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816680908s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926907539s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816635132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869387627s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759147644s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926891327s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816635132s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869370461s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759147644s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869223595s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759025574s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869211197s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759025574s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869052887s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758979797s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.d( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926692963s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.816635132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869034767s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758979797s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.d( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926623344s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.816635132s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869013786s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759086609s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926476479s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816642761s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926461220s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816642761s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868837357s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759063721s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868817329s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759063721s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868921280s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759086609s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868395805s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758804321s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868378639s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758804321s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868298531s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758850098s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926292419s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816856384s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868280411s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758850098s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926271439s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816856384s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.928186417s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.818771362s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.928148270s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.818771362s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867793083s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758537292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867777824s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758537292s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.869153976s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759910583s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926115036s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816886902s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.926100731s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816886902s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868676186s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.759544373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868659973s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759544373s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.15( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927927017s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 active pruub 100.818840027s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.15( v 49'19 (0'0,49'19] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927906990s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 100.818840027s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867640495s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758628845s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867626190s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758628845s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927669525s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.818840027s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927650452s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.818840027s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.868745804s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.759910583s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927399635s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.818763733s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=47/48 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.927382469s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.818763733s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867099762s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 active pruub 104.758552551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=12.867081642s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 unknown NOTIFY pruub 104.758552551s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.922873497s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 active pruub 100.816390991s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=47/48 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.922806740s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 100.816390991s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.940213203s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523231506s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.940189362s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523231506s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.916461945s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.499626160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880593300s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463768005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.916425705s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.499626160s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880570412s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463737488s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880743027s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.464027405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880723953s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.464027405s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880434036s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463768005s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880435944s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463737488s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.939706802s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523200989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.939674377s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523200989s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880097389s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463722229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.880069733s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463722229s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.915785789s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.499504089s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.915760040s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.499504089s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.879807472s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463706970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.879783630s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463706970s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.879198074s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463684082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.879143715s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463684082s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.878317833s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463691711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.878296852s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463691711s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937502861s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523277283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937485695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523277283s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.913521767s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.499481201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.913500786s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.499481201s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937210083s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523330688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937191010s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523330688s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 podman[98577]: 2026-02-02 17:22:55.174673823 +0000 UTC m=+0.123935234 container init 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937251091s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523559570s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.937231064s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523559570s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.877093315s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463539124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.877076149s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463539124s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876830101s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463439941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876811981s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463439941s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.936676025s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523460388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.912796974s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.499610901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876402855s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463264465s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.936655998s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523460388s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876380920s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463264465s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.912747383s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.499610901s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.910194397s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.497261047s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876292229s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463356018s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.910180092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.497261047s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.876266479s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463356018s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.909941673s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.497200012s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.909922600s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.497200012s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.875865936s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463241577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.875761032s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463241577s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.909027100s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.497192383s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.908985138s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.497192383s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.935348511s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523468018s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.935018539s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523468018s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.874385834s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463157654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.874341965s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463165283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.908334732s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 39'483 active pruub 104.497222900s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.874175072s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463088989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.874153137s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463088989s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.908290863s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 104.497222900s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.874267578s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463165283s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934499741s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523605347s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934413910s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523559570s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934391975s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523559570s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.910295486s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.499542236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.910275459s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.499542236s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934453964s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523765564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934433937s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523765564s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873741150s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463157654s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.907487869s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.497024536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873335838s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.462928772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.907459259s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.497024536s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873381615s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463035583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873270988s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.462928772s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934120178s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523834229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933905602s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523696899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934082985s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523834229s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.872945786s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.462768555s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933882713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523696899s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.872925758s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.462768555s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873314857s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463035583s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873044968s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.463050842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.873020172s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.463050842s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933590889s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523696899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870741844s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460891724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933571815s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523696899s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870714188s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460891724s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870677948s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460914612s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870642662s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460914612s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 podman[98577]: 2026-02-02 17:22:55.084185248 +0000 UTC m=+0.033446659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.906413078s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.496803284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.906382561s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.496803284s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933227539s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523727417s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870179176s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460708618s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933138847s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523765564s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870158195s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460708618s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933117867s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523765564s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.933025360s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523727417s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869945526s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460723877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.934477806s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523605347s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870040894s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460823059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.870011330s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460823059s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.906057358s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.497070312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.905986786s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.497070312s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869917870s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460723877s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.872398376s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.463668823s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869478226s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460823059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.932436943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.523803711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.872319221s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.463668823s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869448662s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460823059s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.932402611s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.523803711s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869153023s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460540771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.869103432s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460540771s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 podman[98577]: 2026-02-02 17:22:55.180191224 +0000 UTC m=+0.129452625 container start 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.868349075s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460693359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.868173599s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460533142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.868325233s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460693359s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.868152618s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460533142s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867724419s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460311890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867699623s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460311890s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931982994s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524673462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.904089928s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.496788025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931958199s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524673462s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.904026985s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.496788025s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867371559s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460289001s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867352486s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460289001s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931840897s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524864197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867053986s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 active pruub 110.460083008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931816101s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524864197s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866884232s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.459953308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=45/46 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866865158s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.459953308s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.867032051s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 unknown NOTIFY pruub 110.460083008s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.903099060s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.496246338s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.903023720s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.496246338s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931406975s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524703979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931384087s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524703979s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866713524s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460197449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931200981s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524703979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866686821s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460197449s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931178093s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524703979s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866335869s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.459953308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866717339s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460334778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.931166649s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524879456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.902485847s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.496200562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866155624s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.459953308s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866556168s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460334778s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.865700722s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.459724426s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930813789s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524879456s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.865680695s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.459724426s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.902079582s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.496200562s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930583000s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524909973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930551529s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524909973s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866461754s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.460891724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.866442680s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.460891724s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.865077972s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.459548950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930458069s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524940491s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.865051270s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.459548950s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930432320s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524940491s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.897363663s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 active pruub 104.491966248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.897274017s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 104.491966248s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930157661s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 active pruub 106.524902344s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.930136681s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 unknown NOTIFY pruub 106.524902344s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.864578247s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 active pruub 110.459480286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 51 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=45/46 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.864557266s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 110.459480286s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 podman[98577]: 2026-02-02 17:22:55.185338345 +0000 UTC m=+0.134599736 container attach 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb  2 12:22:55 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb  2 12:22:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=51/52 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.9( v 49'19 lc 36'8 (0'0,49'19] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=51/52 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=51/52 n=1 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.15( v 49'19 lc 36'3 (0'0,49'19] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.e( v 49'19 lc 36'4 (0'0,49'19] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=51/52 n=0 ec=45/22 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=49/37 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=51/52 n=0 ec=45/31 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=32'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 52 pg[10.d( v 49'19 lc 36'5 (0'0,49'19] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.12( v 49'19 lc 39'17 (0'0,49'19] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.14( v 49'19 lc 36'7 (0'0,49'19] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=49'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=51/52 n=1 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=51/52 n=0 ec=41/19 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=51/52 n=0 ec=43/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 52 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=51/52 n=0 ec=47/35 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 16 unknown, 49 peering, 240 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb  2 12:22:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb  2 12:22:57 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=49'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:57 np0005605476 jovial_chaplygin[98594]: could not fetch user info: no user info saved
Feb  2 12:22:57 np0005605476 systemd[1]: libpod-0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7.scope: Deactivated successfully.
Feb  2 12:22:57 np0005605476 podman[98577]: 2026-02-02 17:22:57.761810361 +0000 UTC m=+2.711071762 container died 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:22:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-10b88adc4382029e1942f455cff3107a7e42c3c568a3520dfa5fedaf49041682-merged.mount: Deactivated successfully.
Feb  2 12:22:57 np0005605476 podman[98577]: 2026-02-02 17:22:57.800148283 +0000 UTC m=+2.749409684 container remove 0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7 (image=quay.io/ceph/ceph:v20, name=jovial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:22:57 np0005605476 systemd[1]: libpod-conmon-0ecfeaa2551e4c08c9b73ec3fbfd328b9df88730e0bd366d3d665a66034925e7.scope: Deactivated successfully.
Feb  2 12:22:58 np0005605476 python3[98718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid eb48d0ef-3496-563c-b73d-661fb962013e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:22:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb  2 12:22:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb  2 12:22:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.425810814s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 active pruub 114.024215698s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.11( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.432170868s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=53'484 lcod 53'484 active pruub 114.030685425s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.11( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.432047844s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 114.030685425s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.425734520s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.024215698s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1b( v 53'484 (0'0,53'484] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.432453156s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 active pruub 114.031204224s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.17( v 53'484 (0'0,53'484] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431441307s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 active pruub 114.030258179s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1b( v 53'484 (0'0,53'484] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.432389259s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 114.031204224s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431645393s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 active pruub 114.030609131s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.17( v 53'484 (0'0,53'484] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431371689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 114.030258179s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431384087s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 active pruub 114.030479431s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431328773s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030479431s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431596756s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030609131s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.5( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431369781s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=49'484 lcod 49'484 active pruub 114.030746460s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.5( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.431259155s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=49'484 lcod 49'484 unknown NOTIFY pruub 114.030746460s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430315971s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 active pruub 114.030303955s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.f( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430422783s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 active pruub 114.030532837s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430219650s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030303955s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.f( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430359840s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 114.030532837s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.7( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430651665s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=53'484 lcod 53'484 active pruub 114.031112671s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 54 pg[9.7( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.430615425s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 114.031112671s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.f( v 53'484 (0'0,53'484] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.f( v 53'484 (0'0,53'484] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.17( v 53'484 (0'0,53'484] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.17( v 53'484 (0'0,53'484] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.5( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=49'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.5( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=49'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.11( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.11( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.7( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.7( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1b( v 53'484 (0'0,53'484] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1b( v 53'484 (0'0,53'484] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:58 np0005605476 podman[98719]: 2026-02-02 17:22:58.201159963 +0000 UTC m=+0.053947452 container create 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:22:58 np0005605476 systemd[1]: Started libpod-conmon-97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b.scope.
Feb  2 12:22:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:22:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26fec90481d117e0e5b44a7cd6556cd80734692b116fe3305a7f18450ed95cc8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26fec90481d117e0e5b44a7cd6556cd80734692b116fe3305a7f18450ed95cc8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:22:58 np0005605476 podman[98719]: 2026-02-02 17:22:58.167807597 +0000 UTC m=+0.020595136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 12:22:58 np0005605476 podman[98719]: 2026-02-02 17:22:58.276303216 +0000 UTC m=+0.129090705 container init 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:22:58 np0005605476 podman[98719]: 2026-02-02 17:22:58.281313794 +0000 UTC m=+0.134101253 container start 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:22:58 np0005605476 podman[98719]: 2026-02-02 17:22:58.284285715 +0000 UTC m=+0.137073174 container attach 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:22:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 16 unknown, 49 peering, 240 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:22:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:22:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb  2 12:22:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb  2 12:22:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.1( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.1( v 53'485 (0'0,53'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.588911057s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 active pruub 114.030624390s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.588808060s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030624390s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.19( v 54'486 (0'0,54'486] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=53'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.19( v 54'486 (0'0,54'486] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.587700844s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 active pruub 114.030540466s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.587636948s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030540466s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.587656021s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 active pruub 114.030761719s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.1( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.588010788s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=53'484 lcod 53'484 active pruub 114.031272888s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.587491035s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.030761719s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.1( v 53'485 (0'0,53'485] local-lis/les=52/53 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.587939262s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 114.031272888s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.19( v 54'486 (0'0,54'486] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.588951111s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=53'485 lcod 53'485 active pruub 114.032539368s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.19( v 54'486 (0'0,54'486] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.588909149s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'485 lcod 53'485 unknown NOTIFY pruub 114.032539368s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.586553574s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=53'484 lcod 53'484 active pruub 114.030479431s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:22:59 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 55 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=52/53 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.586509705s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 114.030479431s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.1b( v 53'484 (0'0,53'484] local-lis/les=54/55 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.11( v 53'485 (0'0,53'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.7( v 53'485 (0'0,53'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.5( v 53'485 (0'0,53'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.f( v 53'484 (0'0,53'484] local-lis/les=54/55 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 55 pg[9.17( v 53'484 (0'0,53'484] local-lis/les=54/55 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=53'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb  2 12:22:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.19( v 54'486 (0'0,54'486] local-lis/les=55/56 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=54'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.1( v 53'485 (0'0,53'485] local-lis/les=55/56 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=55/56 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=55/56 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=55/56 n=7 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 56 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=55/56 n=6 ec=47/33 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=54'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]: {
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "user_id": "openstack",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "display_name": "openstack",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "email": "",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "suspended": 0,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "max_buckets": 1000,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "subusers": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "keys": [
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        {
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:            "user": "openstack",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:            "access_key": "JD94WH1DFC5NTQHDJ35R",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:            "secret_key": "rv4XbDsooyYQv4Yu4jFDrSdV8hTAjg7b0YVjtSdX",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:            "active": true,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:            "create_date": "2026-02-02T17:23:00.084517Z"
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        }
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    ],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "swift_keys": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "caps": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "op_mask": "read, write, delete",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "default_placement": "",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "default_storage_class": "",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "placement_tags": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "bucket_quota": {
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "enabled": false,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "check_on_raw": false,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_size": -1,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_size_kb": 0,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_objects": -1
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    },
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "user_quota": {
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "enabled": false,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "check_on_raw": false,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_size": -1,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_size_kb": 0,
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:        "max_objects": -1
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    },
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "temp_url_keys": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "type": "rgw",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "mfa_ids": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "account_id": "",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "path": "/",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "create_date": "2026-02-02T17:23:00.084166Z",
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "tags": [],
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]:    "group_ids": []
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]: }
Feb  2 12:23:00 np0005605476 unruffled_mcclintock[98734]: 
Feb  2 12:23:00 np0005605476 systemd[1]: libpod-97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b.scope: Deactivated successfully.
Feb  2 12:23:00 np0005605476 conmon[98734]: conmon 97d398b20129fa67e27f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b.scope/container/memory.events
Feb  2 12:23:00 np0005605476 podman[98719]: 2026-02-02 17:23:00.116138129 +0000 UTC m=+1.968925598 container died 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:23:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-26fec90481d117e0e5b44a7cd6556cd80734692b116fe3305a7f18450ed95cc8-merged.mount: Deactivated successfully.
Feb  2 12:23:00 np0005605476 podman[98719]: 2026-02-02 17:23:00.148734774 +0000 UTC m=+2.001522223 container remove 97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b (image=quay.io/ceph/ceph:v20, name=unruffled_mcclintock, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:23:00 np0005605476 systemd[1]: libpod-conmon-97d398b20129fa67e27f1f6c667ad7606d4017d857d1ea559109dcb3ed10d42b.scope: Deactivated successfully.
Feb  2 12:23:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 8.2 KiB/s wr, 241 op/s; 1.6 KiB/s, 2 keys/s, 30 objects/s recovering
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  2 12:23:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb  2 12:23:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.533021927s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 active pruub 121.256851196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.532968521s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.256851196s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.532478333s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 active pruub 121.256683350s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.532416344s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 active pruub 121.256660461s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.532369614s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.256660461s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.532421112s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.256683350s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.531240463s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 active pruub 121.255935669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 57 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57 pruub=15.531173706s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.255935669s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:01 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 57 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:01 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 57 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:01 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 57 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:01 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 57 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb  2 12:23:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb  2 12:23:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 58 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 58 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 58 pg[6.e( v 32'39 lc 31'19 (0'0,32'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 58 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:02 np0005605476 ceph-mgr[75493]: [progress INFO root] Completed event 684d3dfd-a343-4d6b-bd2b-acdfd468f111 (Global Recovery Event) in 20 seconds
Feb  2 12:23:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 8.2 KiB/s wr, 241 op/s; 1.6 KiB/s, 2 keys/s, 30 objects/s recovering
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  2 12:23:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 12:23:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 12:23:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb  2 12:23:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 12:23:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 7.5 KiB/s wr, 185 op/s; 1.2 KiB/s, 24 objects/s recovering
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  2 12:23:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.491985321s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 active pruub 120.602592468s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.491925240s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 unknown NOTIFY pruub 120.602592468s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.491498947s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 active pruub 120.602203369s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.491445541s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 unknown NOTIFY pruub 120.602203369s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.490975380s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 active pruub 120.602195740s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.490944862s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 unknown NOTIFY pruub 120.602195740s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.490311623s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 active pruub 120.601768494s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 59 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=15.490279198s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=32'39 unknown NOTIFY pruub 120.601768494s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb  2 12:23:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60 pruub=11.766826630s) [1] r=-1 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 active pruub 121.256401062s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60 pruub=11.766792297s) [1] r=-1 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.256401062s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60 pruub=11.766260147s) [1] r=-1 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 active pruub 121.255958557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=43/45 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60 pruub=11.766241074s) [1] r=-1 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 121.255958557s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:05 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60) [1] r=0 lpr=60 pi=[43,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:05 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60) [1] r=0 lpr=60 pi=[43,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=59/60 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:05 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 60 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb  2 12:23:06 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 12:23:06 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 12:23:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb  2 12:23:06 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb  2 12:23:06 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 61 pg[6.c( v 32'39 lc 31'17 (0'0,32'39] local-lis/les=60/61 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60) [1] r=0 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:06 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 61 pg[6.4( v 32'39 lc 31'15 (0'0,32'39] local-lis/les=60/61 n=2 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=60) [1] r=0 lpr=60 pi=[43,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 459 B/s wr, 6 op/s; 3/251 objects degraded (1.195%); 137 B/s, 1 objects/s recovering
Feb  2 12:23:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3/251 objects degraded (1.195%), 2 pgs degraded (PG_DEGRADED)
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:07 np0005605476 ceph-mgr[75493]: [progress INFO root] Writing back 16 completed events
Feb  2 12:23:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 12:23:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:08 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb  2 12:23:08 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb  2 12:23:08 np0005605476 ceph-mon[75197]: Health check failed: Degraded data redundancy: 3/251 objects degraded (1.195%), 2 pgs degraded (PG_DEGRADED)
Feb  2 12:23:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb  2 12:23:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb  2 12:23:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 5 op/s; 3/251 objects degraded (1.195%); 102 B/s, 0 objects/s recovering
Feb  2 12:23:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:09 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb  2 12:23:09 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb  2 12:23:10 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb  2 12:23:10 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb  2 12:23:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb  2 12:23:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb  2 12:23:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 274 B/s wr, 4 op/s; 304 B/s, 1 keys/s, 1 objects/s recovering
Feb  2 12:23:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  2 12:23:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 12:23:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  2 12:23:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/251 objects degraded (1.195%), 2 pgs degraded)
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb  2 12:23:11 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb  2 12:23:11 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 62 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.862531662s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=32'39 active pruub 120.600860596s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:11 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 62 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.862441063s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=32'39 unknown NOTIFY pruub 120.600860596s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 12:23:11 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 12:23:11 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 62 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.862798691s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=32'39 active pruub 120.601654053s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:11 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 62 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=51/52 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.862712860s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=32'39 unknown NOTIFY pruub 120.601654053s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:11 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:11 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:11 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb  2 12:23:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb  2 12:23:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/251 objects degraded (1.195%), 2 pgs degraded)
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: Cluster is now healthy
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 12:23:12 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 63 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=62/63 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:12 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 63 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=62/63 n=2 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:12 np0005605476 systemd-logind[799]: New session 33 of user zuul.
Feb  2 12:23:12 np0005605476 systemd[1]: Started Session 33 of User zuul.
Feb  2 12:23:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 1 keys/s, 1 objects/s recovering
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  2 12:23:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 12:23:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 12:23:13 np0005605476 python3.9[98985]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 275 B/s, 1 keys/s, 1 objects/s recovering
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  2 12:23:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.854577334 +0000 UTC m=+0.034811532 container create 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:23:14 np0005605476 systemd[1]: Started libpod-conmon-5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7.scope.
Feb  2 12:23:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.839482158 +0000 UTC m=+0.019716376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.945814804 +0000 UTC m=+0.126049022 container init 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.952169653 +0000 UTC m=+0.132403851 container start 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.955077065 +0000 UTC m=+0.135311283 container attach 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:23:14 np0005605476 bold_swanson[99308]: 167 167
Feb  2 12:23:14 np0005605476 systemd[1]: libpod-5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7.scope: Deactivated successfully.
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.958475451 +0000 UTC m=+0.138709659 container died 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:23:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6a9c88aadeb96d4f71eb3d445fd7345904c9adfc25fcd08cd973c0b3ed98ca38-merged.mount: Deactivated successfully.
Feb  2 12:23:14 np0005605476 podman[99272]: 2026-02-02 17:23:14.991025448 +0000 UTC m=+0.171259646 container remove 5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:15 np0005605476 systemd[1]: libpod-conmon-5cd90842a57b48ecd207ce3c8913c56291d044ffc83491a2770759e611a25bc7.scope: Deactivated successfully.
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.114586448 +0000 UTC m=+0.035136551 container create 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:23:15 np0005605476 systemd[1]: Started libpod-conmon-2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa.scope.
Feb  2 12:23:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.170486583 +0000 UTC m=+0.091036686 container init 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.177613164 +0000 UTC m=+0.098163267 container start 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:23:15 np0005605476 python3.9[99380]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.192260046 +0000 UTC m=+0.112810179 container attach 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.097712953 +0000 UTC m=+0.018263076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:15 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb  2 12:23:15 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb  2 12:23:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb  2 12:23:15 np0005605476 wizardly_napier[99403]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:23:15 np0005605476 wizardly_napier[99403]: --> All data devices are unavailable
Feb  2 12:23:15 np0005605476 systemd[1]: libpod-2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa.scope: Deactivated successfully.
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.576466039 +0000 UTC m=+0.497016152 container died 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-83103150583a8e7a07796922517823f0f351c226f876a30bd0e06c967d4235c6-merged.mount: Deactivated successfully.
Feb  2 12:23:15 np0005605476 podman[99386]: 2026-02-02 17:23:15.613149572 +0000 UTC m=+0.533699725 container remove 2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:23:15 np0005605476 systemd[1]: libpod-conmon-2b5aeb102ec5a56388d72589f967159e525c1037117c49971268b8ecda66c0fa.scope: Deactivated successfully.
Feb  2 12:23:15 np0005605476 podman[99505]: 2026-02-02 17:23:15.998974811 +0000 UTC m=+0.032110485 container create faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:23:16 np0005605476 systemd[1]: Started libpod-conmon-faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3.scope.
Feb  2 12:23:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:16.055907315 +0000 UTC m=+0.089043089 container init faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:16.06071703 +0000 UTC m=+0.093852704 container start faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:16.063850779 +0000 UTC m=+0.096986483 container attach faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:23:16 np0005605476 optimistic_carson[99521]: 167 167
Feb  2 12:23:16 np0005605476 systemd[1]: libpod-faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3.scope: Deactivated successfully.
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:16.065361811 +0000 UTC m=+0.098497485 container died faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:15.9861421 +0000 UTC m=+0.019277784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e5e47f8c255faee89092d5ac5ad4db77d6020fdd2dadbc467b4a26e376e9ac8b-merged.mount: Deactivated successfully.
Feb  2 12:23:16 np0005605476 podman[99505]: 2026-02-02 17:23:16.101206431 +0000 UTC m=+0.134342125 container remove faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 12:23:16 np0005605476 systemd[1]: libpod-conmon-faae1d068815b14fef2af572b97822b42b45a430778dcb8b156042597a7041a3.scope: Deactivated successfully.
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.970163345s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=39'483 lcod 0'0 active pruub 128.497146606s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.969747543s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=54'488 lcod 54'488 active pruub 128.497070312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.969801903s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 128.497146606s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.969675064s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=54'488 lcod 54'488 unknown NOTIFY pruub 128.497070312s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.969534874s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=39'483 lcod 0'0 active pruub 128.497055054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.969496727s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 128.497055054s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 64 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.964838028s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=53'484 lcod 53'484 active pruub 128.492660522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 65 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.964798927s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 128.492660522s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=65 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.229203717 +0000 UTC m=+0.045262976 container create fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:23:16 np0005605476 systemd[1]: Started libpod-conmon-fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6.scope.
Feb  2 12:23:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092345ae516095258a1eb52731e86c6e80772c023506d43ba82038514fe827d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092345ae516095258a1eb52731e86c6e80772c023506d43ba82038514fe827d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092345ae516095258a1eb52731e86c6e80772c023506d43ba82038514fe827d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0092345ae516095258a1eb52731e86c6e80772c023506d43ba82038514fe827d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.20660684 +0000 UTC m=+0.022666149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.314259873 +0000 UTC m=+0.130319172 container init fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.321460276 +0000 UTC m=+0.137519545 container start fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.325338895 +0000 UTC m=+0.141398234 container attach fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.636150360s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=55'486 lcod 55'486 active pruub 135.244735718s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.635990143s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=39'483 active pruub 135.244949341s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.635951042s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=39'483 unknown NOTIFY pruub 135.244949341s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.639225006s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=53'484 lcod 53'484 active pruub 135.248397827s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.639188766s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 135.248397827s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.639175415s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=53'484 lcod 53'484 active pruub 135.248474121s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.639151573s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 135.248474121s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 65 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=14.635049820s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=55'486 lcod 55'486 unknown NOTIFY pruub 135.244735718s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[54,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[47,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=54'488 lcod 54'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=54'488 lcod 54'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=53'484 lcod 53'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 66 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=54/55 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:16 np0005605476 pensive_napier[99561]: {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    "0": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "devices": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "/dev/loop3"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            ],
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_name": "ceph_lv0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_size": "21470642176",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "name": "ceph_lv0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "tags": {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_name": "ceph",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.crush_device_class": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.encrypted": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.objectstore": "bluestore",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_id": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.vdo": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.with_tpm": "0"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            },
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "vg_name": "ceph_vg0"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        }
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    ],
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    "1": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "devices": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "/dev/loop4"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            ],
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_name": "ceph_lv1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_size": "21470642176",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "name": "ceph_lv1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "tags": {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_name": "ceph",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.crush_device_class": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.encrypted": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.objectstore": "bluestore",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_id": "1",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.vdo": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.with_tpm": "0"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            },
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "vg_name": "ceph_vg1"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        }
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    ],
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    "2": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "devices": [
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "/dev/loop5"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            ],
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_name": "ceph_lv2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_size": "21470642176",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "name": "ceph_lv2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "tags": {
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.cluster_name": "ceph",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.crush_device_class": "",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.encrypted": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.objectstore": "bluestore",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osd_id": "2",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.vdo": "0",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:                "ceph.with_tpm": "0"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            },
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "type": "block",
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:            "vg_name": "ceph_vg2"
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:        }
Feb  2 12:23:16 np0005605476 pensive_napier[99561]:    ]
Feb  2 12:23:16 np0005605476 pensive_napier[99561]: }
Feb  2 12:23:16 np0005605476 systemd[1]: libpod-fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6.scope: Deactivated successfully.
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.628463104 +0000 UTC m=+0.444522343 container died fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:23:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0092345ae516095258a1eb52731e86c6e80772c023506d43ba82038514fe827d-merged.mount: Deactivated successfully.
Feb  2 12:23:16 np0005605476 podman[99544]: 2026-02-02 17:23:16.665718254 +0000 UTC m=+0.481777483 container remove fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_napier, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:16 np0005605476 systemd[1]: libpod-conmon-fc2e71595f4d4b4ed673b578eb38146d8ba769836d763ed850870db9f0de29c6.scope: Deactivated successfully.
Feb  2 12:23:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  2 12:23:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.090679435 +0000 UTC m=+0.049117455 container create d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:23:17 np0005605476 systemd[1]: Started libpod-conmon-d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7.scope.
Feb  2 12:23:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.16505402 +0000 UTC m=+0.123492080 container init d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.075849547 +0000 UTC m=+0.034287577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.17355741 +0000 UTC m=+0.131995430 container start d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:23:17 np0005605476 systemd[1]: libpod-d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7.scope: Deactivated successfully.
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.178381466 +0000 UTC m=+0.136819496 container attach d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:23:17 np0005605476 competent_leakey[99663]: 167 167
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.179405805 +0000 UTC m=+0.137843815 container died d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:23:17 np0005605476 conmon[99663]: conmon d7efee59eea30df80695 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7.scope/container/memory.events
Feb  2 12:23:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-66fcad6505b3cb375ce8859cb87d8826a400cf7efbfc6ef7701b049976a72549-merged.mount: Deactivated successfully.
Feb  2 12:23:17 np0005605476 podman[99646]: 2026-02-02 17:23:17.206799936 +0000 UTC m=+0.165237946 container remove d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:23:17 np0005605476 systemd[1]: libpod-conmon-d7efee59eea30df806956018a7e1a4abae81116b802a0ad8be86b57ae21332b7.scope: Deactivated successfully.
Feb  2 12:23:17 np0005605476 podman[99686]: 2026-02-02 17:23:17.324737659 +0000 UTC m=+0.032995771 container create 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:23:17 np0005605476 systemd[1]: Started libpod-conmon-01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61.scope.
Feb  2 12:23:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:23:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc458c67f8602c54e3116d330397ecafef3ce69728535de6d620f65719bf07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc458c67f8602c54e3116d330397ecafef3ce69728535de6d620f65719bf07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc458c67f8602c54e3116d330397ecafef3ce69728535de6d620f65719bf07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc458c67f8602c54e3116d330397ecafef3ce69728535de6d620f65719bf07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:23:17 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb  2 12:23:17 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb  2 12:23:17 np0005605476 podman[99686]: 2026-02-02 17:23:17.398185498 +0000 UTC m=+0.106443640 container init 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:23:17 np0005605476 podman[99686]: 2026-02-02 17:23:17.402721556 +0000 UTC m=+0.110979668 container start 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:23:17 np0005605476 podman[99686]: 2026-02-02 17:23:17.405843904 +0000 UTC m=+0.114102036 container attach 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:23:17 np0005605476 podman[99686]: 2026-02-02 17:23:17.30915815 +0000 UTC m=+0.017416292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb  2 12:23:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=10.577649117s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=55'486 lcod 55'486 active pruub 128.499694824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=10.577599525s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=55'486 lcod 55'486 unknown NOTIFY pruub 128.499694824s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=10.574044228s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=39'483 lcod 0'0 active pruub 128.497070312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=10.574017525s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 128.497070312s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=67 pruub=15.535473824s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=32'39 lcod 0'0 active pruub 137.257034302s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=43/45 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=67 pruub=15.535402298s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 137.257034302s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:17 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:17 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:17 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=66/67 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=56'487 lcod 55'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=66/67 n=7 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=55'485 lcod 53'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 67 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=56'485 lcod 53'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=66/67 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=56'489 lcod 54'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=54'485 lcod 53'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[47,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:17 np0005605476 lvm[99781]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:23:17 np0005605476 lvm[99781]: VG ceph_vg0 finished
Feb  2 12:23:17 np0005605476 lvm[99784]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:23:17 np0005605476 lvm[99784]: VG ceph_vg1 finished
Feb  2 12:23:17 np0005605476 lvm[99786]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:23:17 np0005605476 lvm[99786]: VG ceph_vg2 finished
Feb  2 12:23:18 np0005605476 trusting_shaw[99703]: {}
Feb  2 12:23:18 np0005605476 systemd[1]: libpod-01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61.scope: Deactivated successfully.
Feb  2 12:23:18 np0005605476 systemd[1]: libpod-01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61.scope: Consumed 1.028s CPU time.
Feb  2 12:23:18 np0005605476 podman[99686]: 2026-02-02 17:23:18.128386278 +0000 UTC m=+0.836644390 container died 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:23:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6dbc458c67f8602c54e3116d330397ecafef3ce69728535de6d620f65719bf07-merged.mount: Deactivated successfully.
Feb  2 12:23:18 np0005605476 podman[99686]: 2026-02-02 17:23:18.180152957 +0000 UTC m=+0.888411069 container remove 01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shaw, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:23:18 np0005605476 systemd[1]: libpod-conmon-01fefb4aa2482b815e0b8a089d086cb02e20b39db62f60bc298cf4a54f54ff61.scope: Deactivated successfully.
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.990222931s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 active pruub 133.930953979s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 pct=0'0 crt=56'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=56'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 pct=0'0 crt=55'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=55'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 pct=0'0 crt=56'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=56'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987949371s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=56'487 lcod 55'486 active pruub 137.727706909s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987977982s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=56'485 lcod 53'484 active pruub 137.727752686s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987934113s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=39'483 active pruub 137.727737427s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987923622s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=56'485 lcod 53'484 unknown NOTIFY pruub 137.727752686s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987854004s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=56'487 lcod 55'486 unknown NOTIFY pruub 137.727706909s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987734795s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=55'485 lcod 53'484 active pruub 137.727722168s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987691879s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=55'485 lcod 53'484 unknown NOTIFY pruub 137.727722168s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=14.987875938s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=39'483 unknown NOTIFY pruub 137.727737427s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=54'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=54'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.990005493s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.930953979s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.989615440s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=56'489 lcod 54'488 active pruub 133.930892944s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.989433289s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=56'489 lcod 54'488 unknown NOTIFY pruub 133.930892944s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.988899231s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 active pruub 133.930847168s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.988837242s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.930847168s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.988713264s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=54'485 lcod 53'484 active pruub 133.930953979s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 68 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=66/67 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68 pruub=14.987790108s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=54'485 lcod 53'484 unknown NOTIFY pruub 133.930953979s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 pct=0'0 crt=56'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=0/0 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=56'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 68 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=45/45/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  2 12:23:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 12:23:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.f( v 55'485 (0'0,55'485] local-lis/les=68/69 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=55'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.7( v 56'487 (0'0,56'487] local-lis/les=68/69 n=7 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=56'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.e( v 56'489 (0'0,56'489] local-lis/les=68/69 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=56'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.17( v 56'485 (0'0,56'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=66/54 les/c/f=67/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=56'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 69 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=8.619649887s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=32'39 lcod 0'0 active pruub 128.602676392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:19 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 69 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=51/52 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=8.619621277s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 128.602676392s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=54'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=66/47 les/c/f=67/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:19 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 69 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=68/69 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[47,68)/1 crt=56'487 lcod 55'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 69 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[47,68)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb  2 12:23:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb  2 12:23:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb  2 12:23:20 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 12:23:20 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 12:23:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb  2 12:23:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb  2 12:23:20 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 70 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:20 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 70 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:20 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 70 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 pct=0'0 crt=56'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:20 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 70 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=56'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:20 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 70 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.016522408s) [2] async=[2] r=-1 lpr=70 pi=[47,70)/1 crt=39'483 lcod 0'0 active pruub 136.003417969s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:20 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 70 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.016416550s) [2] r=-1 lpr=70 pi=[47,70)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 136.003417969s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:20 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 70 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.014738083s) [2] async=[2] r=-1 lpr=70 pi=[47,70)/1 crt=56'487 lcod 55'486 active pruub 136.002944946s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:20 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 70 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.014605522s) [2] r=-1 lpr=70 pi=[47,70)/1 crt=56'487 lcod 55'486 unknown NOTIFY pruub 136.002944946s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:20 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 70 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 2 remapped+peering, 9 peering, 294 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 498 B/s, 11 objects/s recovering
Feb  2 12:23:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb  2 12:23:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb  2 12:23:21 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=70/71 n=7 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:21 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 71 pg[9.18( v 56'487 (0'0,56'487] local-lis/les=70/71 n=6 ec=47/33 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=56'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb  2 12:23:21 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb  2 12:23:21 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb  2 12:23:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 2 remapped+peering, 9 peering, 294 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 478 B/s, 11 objects/s recovering
Feb  2 12:23:23 np0005605476 systemd[1]: session-33.scope: Deactivated successfully.
Feb  2 12:23:23 np0005605476 systemd[1]: session-33.scope: Consumed 7.817s CPU time.
Feb  2 12:23:23 np0005605476 systemd-logind[799]: Session 33 logged out. Waiting for processes to exit.
Feb  2 12:23:23 np0005605476 systemd-logind[799]: Removed session 33.
Feb  2 12:23:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:24 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb  2 12:23:24 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb  2 12:23:24 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb  2 12:23:24 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb  2 12:23:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 420 B/s, 10 objects/s recovering
Feb  2 12:23:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 1 objects/s recovering
Feb  2 12:23:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  2 12:23:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 12:23:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  2 12:23:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 12:23:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb  2 12:23:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb  2 12:23:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 12:23:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  2 12:23:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 12:23:28 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 72 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=13.418828964s) [0] r=-1 lpr=72 pi=[57,72)/1 crt=32'39 lcod 0'0 active pruub 142.649597168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:28 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 72 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=13.418759346s) [0] r=-1 lpr=72 pi=[57,72)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 142.649597168s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:28 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=72) [0] r=0 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb  2 12:23:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb  2 12:23:29 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 73 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=15.600839615s) [1] r=-1 lpr=73 pi=[59,73)/1 crt=32'39 active pruub 149.492721558s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:29 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 73 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=15.600792885s) [1] r=-1 lpr=73 pi=[59,73)/1 crt=32'39 unknown NOTIFY pruub 149.492721558s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:29 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 73 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=72) [0] r=0 lpr=72 pi=[57,72)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:29 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 73 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=73) [1] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:30 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb  2 12:23:30 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb  2 12:23:30 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb  2 12:23:30 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb  2 12:23:30 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 74 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=73/74 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=73) [1] r=0 lpr=73 pi=[59,73)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  2 12:23:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb  2 12:23:31 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 75 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=12.389830589s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=39'483 lcod 0'0 active pruub 144.497451782s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:31 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 75 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=12.389776230s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 144.497451782s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:31 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 75 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=12.385886192s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=55'486 lcod 55'486 active pruub 144.493774414s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:31 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 75 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=12.385817528s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=55'486 lcod 55'486 unknown NOTIFY pruub 144.493774414s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 12:23:31 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 12:23:31 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:31 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb  2 12:23:32 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb  2 12:23:32 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb  2 12:23:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb  2 12:23:32 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 76 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=43/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=11.639615059s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=32'39 active pruub 148.558517456s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:32 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 76 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=43/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=11.639500618s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=32'39 unknown NOTIFY pruub 148.558517456s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 12:23:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 76 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:32 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:32 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:32 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 76 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 76 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=47/48 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=55'486 lcod 55'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 76 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:32 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 76 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=47/48 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:32 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb  2 12:23:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb  2 12:23:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb  2 12:23:33 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb  2 12:23:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 12:23:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 12:23:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb  2 12:23:33 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb  2 12:23:33 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb  2 12:23:33 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 77 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=76/77 n=1 ec=43/21 lis/c=62/62 les/c/f=63/63/0 sis=76) [1] r=0 lpr=76 pi=[62,76)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 77 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=76/77 n=7 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[47,76)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 77 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=76/77 n=6 ec=47/33 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[47,76)/1 crt=56'487 lcod 55'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 2 activating+remapped, 1 active+clean+scrubbing, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14/252 objects misplaced (5.556%); 17 B/s, 0 objects/s recovering
Feb  2 12:23:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb  2 12:23:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb  2 12:23:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 78 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 pct=0'0 crt=56'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 78 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=56'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb  2 12:23:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 78 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 78 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 78 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=76/77 n=6 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.684110641s) [2] async=[2] r=-1 lpr=78 pi=[47,78)/1 crt=56'487 lcod 55'486 active pruub 150.853012085s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 78 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=76/77 n=7 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.682112694s) [2] async=[2] r=-1 lpr=78 pi=[47,78)/1 crt=39'483 lcod 0'0 active pruub 150.851501465s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 78 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=76/77 n=7 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.682017326s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 150.851501465s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 78 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=76/77 n=6 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.683338165s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=56'487 lcod 55'486 unknown NOTIFY pruub 150.853012085s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb  2 12:23:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb  2 12:23:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb  2 12:23:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb  2 12:23:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb  2 12:23:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb  2 12:23:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb  2 12:23:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 79 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=78/79 n=6 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=56'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 79 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=78/79 n=7 ec=47/33 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:35 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb  2 12:23:35 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb  2 12:23:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:23:36
Feb  2 12:23:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:23:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Feb  2 12:23:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 2 activating+remapped, 1 active+clean+scrubbing, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 14/252 objects misplaced (5.556%); 17 B/s, 0 objects/s recovering
Feb  2 12:23:36 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb  2 12:23:36 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb  2 12:23:37 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb  2 12:23:37 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:23:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:23:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb  2 12:23:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb  2 12:23:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 2 activating+remapped, 1 active+clean+scrubbing, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 14/252 objects misplaced (5.556%); 11 B/s, 0 objects/s recovering
Feb  2 12:23:38 np0005605476 systemd-logind[799]: New session 34 of user zuul.
Feb  2 12:23:38 np0005605476 systemd[1]: Started Session 34 of User zuul.
Feb  2 12:23:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:39 np0005605476 python3.9[100027]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 12:23:39 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb  2 12:23:39 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb  2 12:23:40 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb  2 12:23:40 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb  2 12:23:40 np0005605476 python3.9[100201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:23:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 2 objects/s recovering
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 12:23:40 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 12:23:41 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb  2 12:23:41 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb  2 12:23:41 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb  2 12:23:41 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb  2 12:23:41 np0005605476 python3.9[100357]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:23:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 12:23:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 12:23:42 np0005605476 python3.9[100510]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:23:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb  2 12:23:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb  2 12:23:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb  2 12:23:42 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=10.441261292s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=32'39 active pruub 157.491958618s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 12:23:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 12:23:42 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=10.440784454s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=32'39 unknown NOTIFY pruub 157.491958618s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:42 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:43 np0005605476 python3.9[100664]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:23:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb  2 12:23:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb  2 12:23:43 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb  2 12:23:43 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb  2 12:23:43 np0005605476 python3.9[100816]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:23:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb  2 12:23:43 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 12:23:43 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 12:23:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb  2 12:23:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb  2 12:23:43 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 82 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:23:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:44 np0005605476 python3.9[100966]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:23:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb  2 12:23:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb  2 12:23:44 np0005605476 network[100983]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:23:44 np0005605476 network[100984]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:23:44 np0005605476 network[100985]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:23:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Feb  2 12:23:44 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb  2 12:23:44 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb  2 12:23:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:46 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb  2 12:23:46 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.673370225204266e-06 of space, bias 4.0, pg target 0.0020080442702451193 quantized to 16 (current 16)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:23:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:23:47 np0005605476 python3.9[101245]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:23:47 np0005605476 python3.9[101395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:23:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb  2 12:23:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb  2 12:23:48 np0005605476 python3.9[101549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:23:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:49 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb  2 12:23:49 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb  2 12:23:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb  2 12:23:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb  2 12:23:49 np0005605476 python3.9[101707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:23:50 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb  2 12:23:50 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb  2 12:23:50 np0005605476 python3.9[101791]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:23:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 0 objects/s recovering
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb  2 12:23:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb  2 12:23:51 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb  2 12:23:51 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb  2 12:23:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb  2 12:23:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb  2 12:23:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 12:23:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 0 objects/s recovering
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb  2 12:23:52 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb  2 12:23:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 0 objects/s recovering
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb  2 12:23:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb  2 12:23:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 12:23:56 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb  2 12:23:56 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb  2 12:23:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb  2 12:23:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb  2 12:23:57 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb  2 12:23:57 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb  2 12:23:57 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 12:23:58 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb  2 12:23:58 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb  2 12:23:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb  2 12:23:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb  2 12:23:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 86 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=86 pruub=13.039529800s) [2] r=-1 lpr=86 pi=[55,86)/1 crt=54'485 active pruub 176.250030518s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 87 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=86 pruub=13.039443970s) [2] r=-1 lpr=86 pi=[55,86)/1 crt=54'485 unknown NOTIFY pruub 176.250030518s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:59 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=86) [2] r=0 lpr=87 pi=[55,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:23:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb  2 12:23:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb  2 12:23:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb  2 12:23:59 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[55,88)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:59 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[55,88)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 88 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=88) [2]/[0] r=0 lpr=88 pi=[55,88)/1 crt=54'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 88 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=88) [2]/[0] r=0 lpr=88 pi=[55,88)/1 crt=54'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:23:59 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb  2 12:23:59 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb  2 12:23:59 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb  2 12:24:00 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 89 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=88/89 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[55,88)/1 crt=54'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb  2 12:24:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  2 12:24:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb  2 12:24:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 12:24:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb  2 12:24:01 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  2 12:24:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 90 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=90 pruub=9.979783058s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=39'483 active pruub 175.249160767s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 90 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=90 pruub=9.979722977s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=39'483 unknown NOTIFY pruub 175.249160767s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 90 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=88/89 n=6 ec=47/33 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=14.988298416s) [2] async=[2] r=-1 lpr=90 pi=[55,90)/1 crt=54'485 active pruub 180.258178711s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:01 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 90 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=88/89 n=6 ec=47/33 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=14.988186836s) [2] r=-1 lpr=90 pi=[55,90)/1 crt=54'485 unknown NOTIFY pruub 180.258178711s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb  2 12:24:01 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:01 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 90 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=88/55 les/c/f=89/56/0 sis=90) [2] r=0 lpr=90 pi=[55,90)/1 pct=0'0 crt=54'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:01 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 90 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=88/55 les/c/f=89/56/0 sis=90) [2] r=0 lpr=90 pi=[55,90)/1 crt=54'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb  2 12:24:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb  2 12:24:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=-1 lpr=91 pi=[54,91)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:02 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=-1 lpr=91 pi=[54,91)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb  2 12:24:02 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 91 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=0 lpr=91 pi=[54,91)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:02 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 91 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=0 lpr=91 pi=[54,91)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:02 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 91 pg[9.13( v 54'485 (0'0,54'485] local-lis/les=90/91 n=6 ec=47/33 lis/c=88/55 les/c/f=89/56/0 sis=90) [2] r=0 lpr=90 pi=[55,90)/1 crt=54'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:02 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb  2 12:24:02 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb  2 12:24:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 1 objects/s recovering
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb  2 12:24:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  2 12:24:03 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb  2 12:24:03 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb  2 12:24:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb  2 12:24:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 12:24:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb  2 12:24:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  2 12:24:03 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 92 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=92 pruub=12.478935242s) [0] r=-1 lpr=92 pi=[68,92)/1 crt=39'483 active pruub 172.302703857s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:03 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 92 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=92 pruub=12.478890419s) [0] r=-1 lpr=92 pi=[68,92)/1 crt=39'483 unknown NOTIFY pruub 172.302703857s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb  2 12:24:03 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [0] r=0 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:03 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb  2 12:24:03 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 92 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=91/92 n=6 ec=47/33 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] async=[1] r=0 lpr=91 pi=[54,91)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:03 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb  2 12:24:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb  2 12:24:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb  2 12:24:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb  2 12:24:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb  2 12:24:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb  2 12:24:04 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 93 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=0 lpr=93 pi=[68,93)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:04 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 93 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=0 lpr=93 pi=[68,93)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=91/92 n=6 ec=47/33 lis/c=91/54 les/c/f=92/55/0 sis=93 pruub=15.361567497s) [1] async=[1] r=-1 lpr=93 pi=[54,93)/1 crt=39'483 active pruub 183.617172241s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:04 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=91/92 n=6 ec=47/33 lis/c=91/54 les/c/f=92/55/0 sis=93 pruub=15.361342430s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=39'483 unknown NOTIFY pruub 183.617172241s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:04 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 12:24:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb  2 12:24:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb  2 12:24:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb  2 12:24:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb  2 12:24:05 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 94 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=47/33 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:05 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 94 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[68,93)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb  2 12:24:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb  2 12:24:06 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb  2 12:24:06 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:06 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:06 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=47/33 lis/c=93/68 les/c/f=94/69/0 sis=95 pruub=14.935072899s) [0] async=[0] r=-1 lpr=95 pi=[68,95)/1 crt=39'483 active pruub 177.791183472s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:06 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=47/33 lis/c=93/68 les/c/f=94/69/0 sis=95 pruub=14.934980392s) [0] r=-1 lpr=95 pi=[68,95)/1 crt=39'483 unknown NOTIFY pruub 177.791183472s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:06 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb  2 12:24:06 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb  2 12:24:06 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb  2 12:24:06 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb  2 12:24:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb  2 12:24:07 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb  2 12:24:07 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb  2 12:24:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb  2 12:24:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb  2 12:24:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb  2 12:24:07 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 96 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=95/96 n=6 ec=47/33 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:07 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb  2 12:24:07 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:08 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb  2 12:24:08 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb  2 12:24:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Feb  2 12:24:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb  2 12:24:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb  2 12:24:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:09 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb  2 12:24:09 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb  2 12:24:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb  2 12:24:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb  2 12:24:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  2 12:24:10 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb  2 12:24:10 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb  2 12:24:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb  2 12:24:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 12:24:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb  2 12:24:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb  2 12:24:11 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  2 12:24:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb  2 12:24:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb  2 12:24:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 12:24:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb  2 12:24:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb  2 12:24:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Feb  2 12:24:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb  2 12:24:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  2 12:24:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb  2 12:24:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  2 12:24:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 12:24:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb  2 12:24:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb  2 12:24:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb  2 12:24:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb  2 12:24:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:14 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 12:24:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Feb  2 12:24:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb  2 12:24:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  2 12:24:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb  2 12:24:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 12:24:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb  2 12:24:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb  2 12:24:15 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 99 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=12.821564674s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=54'486 lcod 54'486 active pruub 192.250183105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:15 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 99 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=12.821508408s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=54'486 lcod 54'486 unknown NOTIFY pruub 192.250183105s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:15 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  2 12:24:15 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb  2 12:24:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 100 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=54'486 lcod 54'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:16 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 100 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=55/56 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=54'486 lcod 54'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb  2 12:24:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:16 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 12:24:16 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb  2 12:24:16 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb  2 12:24:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb  2 12:24:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb  2 12:24:17 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 101 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=100/101 n=6 ec=47/33 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[55,100)/1 crt=56'487 lcod 54'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  2 12:24:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb  2 12:24:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 102 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=100/101 n=6 ec=47/33 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.977605820s) [2] async=[2] r=-1 lpr=102 pi=[55,102)/1 crt=56'487 lcod 54'486 active pruub 197.456665039s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:18 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 102 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=100/101 n=6 ec=47/33 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.977510452s) [2] r=-1 lpr=102 pi=[55,102)/1 crt=56'487 lcod 54'486 unknown NOTIFY pruub 197.456665039s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 102 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 pct=0'0 crt=56'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:18 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 102 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=56'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb  2 12:24:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb  2 12:24:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:24:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.147153575 +0000 UTC m=+0.039039019 container create c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:24:19 np0005605476 systemd[1]: Started libpod-conmon-c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6.scope.
Feb  2 12:24:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.223246224 +0000 UTC m=+0.115131718 container init c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.132922504 +0000 UTC m=+0.024807968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.228880337 +0000 UTC m=+0.120765771 container start c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:24:19 np0005605476 heuristic_pascal[102169]: 167 167
Feb  2 12:24:19 np0005605476 systemd[1]: libpod-c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6.scope: Deactivated successfully.
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.233178071 +0000 UTC m=+0.125063565 container attach c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.233565172 +0000 UTC m=+0.125450626 container died c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:24:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-754947871ed43bfdc1cd6ee5cff01830891a1db333ad8a67126cfae86657f6d9-merged.mount: Deactivated successfully.
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:24:19 np0005605476 podman[102153]: 2026-02-02 17:24:19.276814712 +0000 UTC m=+0.168700166 container remove c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb  2 12:24:19 np0005605476 systemd[1]: libpod-conmon-c1fb60a36e0496b7ddcdb91616f919156fbd7ec023182a1a09f7847dfe405dd6.scope: Deactivated successfully.
Feb  2 12:24:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb  2 12:24:19 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 103 pg[9.19( v 56'487 (0'0,56'487] local-lis/les=102/103 n=6 ec=47/33 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=56'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:19 np0005605476 podman[102193]: 2026-02-02 17:24:19.415324474 +0000 UTC m=+0.053671642 container create 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:24:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb  2 12:24:19 np0005605476 systemd[1]: Started libpod-conmon-704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e.scope.
Feb  2 12:24:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb  2 12:24:19 np0005605476 podman[102193]: 2026-02-02 17:24:19.392575877 +0000 UTC m=+0.030923095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:19 np0005605476 podman[102193]: 2026-02-02 17:24:19.526374663 +0000 UTC m=+0.164721861 container init 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:24:19 np0005605476 podman[102193]: 2026-02-02 17:24:19.534793196 +0000 UTC m=+0.173140324 container start 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:24:19 np0005605476 podman[102193]: 2026-02-02 17:24:19.538801762 +0000 UTC m=+0.177148980 container attach 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:24:19 np0005605476 dazzling_visvesvaraya[102210]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:24:19 np0005605476 dazzling_visvesvaraya[102210]: --> All data devices are unavailable
Feb  2 12:24:20 np0005605476 systemd[1]: libpod-704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e.scope: Deactivated successfully.
Feb  2 12:24:20 np0005605476 podman[102193]: 2026-02-02 17:24:20.016359699 +0000 UTC m=+0.654706897 container died 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:24:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-349a9825b68a039318a9298174d21edc5e42e6349e62605cd29310d32e64c420-merged.mount: Deactivated successfully.
Feb  2 12:24:20 np0005605476 podman[102193]: 2026-02-02 17:24:20.05513744 +0000 UTC m=+0.693484558 container remove 704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:24:20 np0005605476 systemd[1]: libpod-conmon-704daf05f7040880a7b1edf6d7013facd101c81d2b57047adafb58aa614e124e.scope: Deactivated successfully.
Feb  2 12:24:20 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.449809944 +0000 UTC m=+0.036853706 container create 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:24:20 np0005605476 systemd[1]: Started libpod-conmon-01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192.scope.
Feb  2 12:24:20 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.528288861 +0000 UTC m=+0.115332633 container init 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.435541771 +0000 UTC m=+0.022585583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.532691008 +0000 UTC m=+0.119734770 container start 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.535986394 +0000 UTC m=+0.123030156 container attach 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:24:20 np0005605476 objective_stonebraker[102320]: 167 167
Feb  2 12:24:20 np0005605476 systemd[1]: libpod-01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192.scope: Deactivated successfully.
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.562347155 +0000 UTC m=+0.149390947 container died 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:24:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ba347273e8ffd8928f9174fdd6df519f241942a06625e2077fcd556e2b007a92-merged.mount: Deactivated successfully.
Feb  2 12:24:20 np0005605476 podman[102303]: 2026-02-02 17:24:20.602779924 +0000 UTC m=+0.189823696 container remove 01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:24:20 np0005605476 systemd[1]: libpod-conmon-01ec0d25dee4db8c63e3adc4a03b6102ea5337cf06f552d5eeb718e165ab2192.scope: Deactivated successfully.
Feb  2 12:24:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 2 objects/s recovering
Feb  2 12:24:20 np0005605476 podman[102345]: 2026-02-02 17:24:20.776698579 +0000 UTC m=+0.057544214 container create ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:24:20 np0005605476 systemd[1]: Started libpod-conmon-ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f.scope.
Feb  2 12:24:20 np0005605476 podman[102345]: 2026-02-02 17:24:20.752585092 +0000 UTC m=+0.033430767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:20 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44794f14120302f2e1b1a2eb5d272b36436ad86bb95ee4a0e09d40d67092a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44794f14120302f2e1b1a2eb5d272b36436ad86bb95ee4a0e09d40d67092a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44794f14120302f2e1b1a2eb5d272b36436ad86bb95ee4a0e09d40d67092a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:20 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44794f14120302f2e1b1a2eb5d272b36436ad86bb95ee4a0e09d40d67092a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:20 np0005605476 podman[102345]: 2026-02-02 17:24:20.895754999 +0000 UTC m=+0.176600654 container init ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:24:20 np0005605476 podman[102345]: 2026-02-02 17:24:20.902653558 +0000 UTC m=+0.183499193 container start ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:24:20 np0005605476 podman[102345]: 2026-02-02 17:24:20.906504269 +0000 UTC m=+0.187349924 container attach ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:24:21 np0005605476 nifty_curie[102361]: {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    "0": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "devices": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "/dev/loop3"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            ],
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_name": "ceph_lv0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_size": "21470642176",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "name": "ceph_lv0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "tags": {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_name": "ceph",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.crush_device_class": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.encrypted": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.objectstore": "bluestore",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_id": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.vdo": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.with_tpm": "0"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            },
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "vg_name": "ceph_vg0"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        }
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    ],
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    "1": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "devices": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "/dev/loop4"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            ],
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_name": "ceph_lv1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_size": "21470642176",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "name": "ceph_lv1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "tags": {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_name": "ceph",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.crush_device_class": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.encrypted": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.objectstore": "bluestore",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_id": "1",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.vdo": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.with_tpm": "0"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            },
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "vg_name": "ceph_vg1"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        }
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    ],
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    "2": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "devices": [
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "/dev/loop5"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            ],
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_name": "ceph_lv2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_size": "21470642176",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "name": "ceph_lv2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "tags": {
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.cluster_name": "ceph",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.crush_device_class": "",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.encrypted": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.objectstore": "bluestore",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osd_id": "2",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.vdo": "0",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:                "ceph.with_tpm": "0"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            },
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "type": "block",
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:            "vg_name": "ceph_vg2"
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:        }
Feb  2 12:24:21 np0005605476 nifty_curie[102361]:    ]
Feb  2 12:24:21 np0005605476 nifty_curie[102361]: }
Feb  2 12:24:21 np0005605476 systemd[1]: libpod-ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f.scope: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102345]: 2026-02-02 17:24:21.197143847 +0000 UTC m=+0.477989482 container died ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:24:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e44794f14120302f2e1b1a2eb5d272b36436ad86bb95ee4a0e09d40d67092a3b-merged.mount: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102345]: 2026-02-02 17:24:21.23392446 +0000 UTC m=+0.514770095 container remove ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_curie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:24:21 np0005605476 systemd[1]: libpod-conmon-ca3c5104f2025c19e2d150fab46ca0f5af6d74907b5a08053ba3ab2fd47dad1f.scope: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.646924533 +0000 UTC m=+0.044226419 container create 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:24:21 np0005605476 systemd[1]: Started libpod-conmon-89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e.scope.
Feb  2 12:24:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.704490327 +0000 UTC m=+0.101792233 container init 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:24:21 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.711152929 +0000 UTC m=+0.108454825 container start 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.714092844 +0000 UTC m=+0.111394730 container attach 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:24:21 np0005605476 sweet_greider[102459]: 167 167
Feb  2 12:24:21 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb  2 12:24:21 np0005605476 systemd[1]: libpod-89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e.scope: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.716928546 +0000 UTC m=+0.114230432 container died 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.629465049 +0000 UTC m=+0.026767035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3571cb7a0bb3c488f5ea1a9b899c6b3fe74a2910a67ff8fa62afa80a71c4c342-merged.mount: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102443]: 2026-02-02 17:24:21.744588055 +0000 UTC m=+0.141889941 container remove 89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:24:21 np0005605476 systemd[1]: libpod-conmon-89e1fa4e65a02e74e61f6c97684e562531f0ce1382113ce9da3213f78901d25e.scope: Deactivated successfully.
Feb  2 12:24:21 np0005605476 podman[102483]: 2026-02-02 17:24:21.880826482 +0000 UTC m=+0.042607072 container create 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:24:21 np0005605476 systemd[1]: Started libpod-conmon-3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4.scope.
Feb  2 12:24:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:24:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc925a8d1fb55aceacdb9466a85f24315b30c572085221b32b3ed842ccfd9bb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc925a8d1fb55aceacdb9466a85f24315b30c572085221b32b3ed842ccfd9bb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc925a8d1fb55aceacdb9466a85f24315b30c572085221b32b3ed842ccfd9bb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc925a8d1fb55aceacdb9466a85f24315b30c572085221b32b3ed842ccfd9bb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:24:21 np0005605476 podman[102483]: 2026-02-02 17:24:21.860965018 +0000 UTC m=+0.022745608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:24:21 np0005605476 podman[102483]: 2026-02-02 17:24:21.966488567 +0000 UTC m=+0.128269137 container init 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:24:21 np0005605476 podman[102483]: 2026-02-02 17:24:21.971204593 +0000 UTC m=+0.132985143 container start 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:24:21 np0005605476 podman[102483]: 2026-02-02 17:24:21.974339744 +0000 UTC m=+0.136120314 container attach 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:24:22 np0005605476 lvm[102575]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:24:22 np0005605476 lvm[102575]: VG ceph_vg0 finished
Feb  2 12:24:22 np0005605476 lvm[102578]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:24:22 np0005605476 lvm[102578]: VG ceph_vg1 finished
Feb  2 12:24:22 np0005605476 lvm[102580]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:24:22 np0005605476 lvm[102580]: VG ceph_vg2 finished
Feb  2 12:24:22 np0005605476 lvm[102581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:24:22 np0005605476 lvm[102581]: VG ceph_vg0 finished
Feb  2 12:24:22 np0005605476 adoring_yonath[102499]: {}
Feb  2 12:24:22 np0005605476 systemd[1]: libpod-3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4.scope: Deactivated successfully.
Feb  2 12:24:22 np0005605476 podman[102483]: 2026-02-02 17:24:22.712888644 +0000 UTC m=+0.874669204 container died 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:24:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb  2 12:24:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-cc925a8d1fb55aceacdb9466a85f24315b30c572085221b32b3ed842ccfd9bb3-merged.mount: Deactivated successfully.
Feb  2 12:24:22 np0005605476 podman[102483]: 2026-02-02 17:24:22.754437224 +0000 UTC m=+0.916217784 container remove 3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:24:22 np0005605476 systemd[1]: libpod-conmon-3b016b1da49280d5227bec1895bac43221aaa2316b017e39333b3c88ee2f89a4.scope: Deactivated successfully.
Feb  2 12:24:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:24:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:24:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:23 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb  2 12:24:23 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb  2 12:24:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:24:23 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb  2 12:24:23 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb  2 12:24:24 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 104 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=78/79 n=6 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=14.953407288s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=56'487 active pruub 196.496643066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:24 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 104 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=78/79 n=6 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=14.953315735s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=56'487 unknown NOTIFY pruub 196.496643066s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:24 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:24 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  2 12:24:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb  2 12:24:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb  2 12:24:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb  2 12:24:25 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 12:24:25 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:25 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:25 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 105 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=78/79 n=6 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=56'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:25 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 105 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=78/79 n=6 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=56'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb  2 12:24:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb  2 12:24:26 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb  2 12:24:26 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 106 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=105/106 n=6 ec=47/33 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=56'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb  2 12:24:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb  2 12:24:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb  2 12:24:27 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 107 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 pct=0'0 crt=56'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:27 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 107 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=0/0 n=6 ec=47/33 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=56'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:27 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 107 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=105/106 n=6 ec=47/33 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.002693176s) [0] async=[0] r=-1 lpr=107 pi=[78,107)/1 crt=56'487 active pruub 199.589767456s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:27 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 107 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=105/106 n=6 ec=47/33 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.002160072s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=56'487 unknown NOTIFY pruub 199.589767456s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb  2 12:24:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb  2 12:24:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb  2 12:24:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb  2 12:24:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb  2 12:24:28 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 108 pg[9.1c( v 56'487 (0'0,56'487] local-lis/les=107/108 n=6 ec=47/33 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=56'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:29 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb  2 12:24:29 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb  2 12:24:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 418 B/s wr, 12 op/s; 170 B/s, 3 objects/s recovering
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb  2 12:24:30 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  2 12:24:30 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb  2 12:24:30 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb  2 12:24:31 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb  2 12:24:31 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb  2 12:24:31 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 12:24:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb  2 12:24:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb  2 12:24:32 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb  2 12:24:32 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb  2 12:24:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 348 B/s wr, 10 op/s; 142 B/s, 3 objects/s recovering
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb  2 12:24:32 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  2 12:24:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb  2 12:24:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 110 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=14.142454147s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=54'485 active pruub 204.303268433s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 110 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=14.142419815s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=54'485 unknown NOTIFY pruub 204.303268433s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:33 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=110) [0] r=0 lpr=110 pi=[68,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb  2 12:24:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb  2 12:24:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 12:24:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb  2 12:24:33 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 111 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=54'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:33 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 111 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=54'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:33 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:33 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 174 B/s wr, 9 op/s; 142 B/s, 3 objects/s recovering
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:24:34 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb  2 12:24:34 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb  2 12:24:34 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 12:24:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 112 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=112 pruub=12.635749817s) [1] r=-1 lpr=112 pi=[68,112)/1 crt=39'483 active pruub 204.301544189s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:34 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 112 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=112 pruub=12.635684013s) [1] r=-1 lpr=112 pi=[68,112)/1 crt=39'483 unknown NOTIFY pruub 204.301544189s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:34 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [1] r=0 lpr=112 pi=[68,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 112 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=111/112 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[68,111)/1 crt=54'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb  2 12:24:35 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 12:24:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb  2 12:24:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb  2 12:24:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 113 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=0 lpr=113 pi=[68,113)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 113 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=0 lpr=113 pi=[68,113)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:35 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:35 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[68,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 113 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=111/112 n=6 ec=47/33 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=15.457698822s) [0] async=[0] r=-1 lpr=113 pi=[68,113)/1 crt=54'485 active pruub 208.143264771s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:35 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 113 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=111/112 n=6 ec=47/33 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=15.457576752s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'485 unknown NOTIFY pruub 208.143264771s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:35 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 113 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 pct=0'0 crt=54'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:35 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 113 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=0/0 n=6 ec=47/33 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=54'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:24:36
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root']
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:24:36 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb  2 12:24:36 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb  2 12:24:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb  2 12:24:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb  2 12:24:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb  2 12:24:36 np0005605476 ceph-osd[85696]: osd.0 pg_epoch: 114 pg[9.1e( v 54'485 (0'0,54'485] local-lis/les=113/114 n=6 ec=47/33 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=54'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:37 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 114 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=47/33 lis/c=68/68 les/c/f=69/69/0 sis=113) [1]/[2] async=[1] r=0 lpr=113 pi=[68,113)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:24:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:24:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb  2 12:24:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb  2 12:24:37 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=47/33 lis/c=113/68 les/c/f=114/69/0 sis=115 pruub=15.044556618s) [1] async=[1] r=-1 lpr=115 pi=[68,115)/1 crt=39'483 active pruub 209.762069702s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:37 np0005605476 ceph-osd[87792]: osd.2 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=47/33 lis/c=113/68 les/c/f=114/69/0 sis=115 pruub=15.044447899s) [1] r=-1 lpr=115 pi=[68,115)/1 crt=39'483 unknown NOTIFY pruub 209.762069702s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 12:24:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb  2 12:24:37 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 12:24:37 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=47/33 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 12:24:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb  2 12:24:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb  2 12:24:38 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb  2 12:24:38 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb  2 12:24:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb  2 12:24:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb  2 12:24:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb  2 12:24:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb  2 12:24:38 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb  2 12:24:39 np0005605476 ceph-osd[86737]: osd.1 pg_epoch: 116 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=115/116 n=6 ec=47/33 lis/c=113/68 les/c/f=114/69/0 sis=115) [1] r=0 lpr=115 pi=[68,115)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 12:24:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Feb  2 12:24:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb  2 12:24:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb  2 12:24:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Feb  2 12:24:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb  2 12:24:44 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb  2 12:24:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Feb  2 12:24:45 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb  2 12:24:45 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb  2 12:24:45 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb  2 12:24:45 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb  2 12:24:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:24:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:24:47 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb  2 12:24:47 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb  2 12:24:48 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb  2 12:24:48 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb  2 12:24:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb  2 12:24:48 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb  2 12:24:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Feb  2 12:24:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb  2 12:24:49 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb  2 12:24:50 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb  2 12:24:50 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb  2 12:24:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Feb  2 12:24:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb  2 12:24:51 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb  2 12:24:51 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb  2 12:24:51 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb  2 12:24:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:52 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb  2 12:24:52 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb  2 12:24:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:24:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:57 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb  2 12:24:57 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb  2 12:24:57 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb  2 12:24:57 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb  2 12:24:58 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb  2 12:24:58 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb  2 12:24:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:24:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:00 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb  2 12:25:00 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb  2 12:25:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb  2 12:25:01 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb  2 12:25:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb  2 12:25:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb  2 12:25:01 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb  2 12:25:01 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb  2 12:25:02 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb  2 12:25:02 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb  2 12:25:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:04 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb  2 12:25:04 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb  2 12:25:04 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb  2 12:25:04 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb  2 12:25:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:05 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb  2 12:25:05 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb  2 12:25:06 np0005605476 python3.9[102925]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:25:06 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb  2 12:25:06 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb  2 12:25:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:07 np0005605476 python3.9[103212]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 12:25:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb  2 12:25:08 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb  2 12:25:08 np0005605476 python3.9[103364]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 12:25:08 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb  2 12:25:08 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb  2 12:25:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:09 np0005605476 python3.9[103516]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:25:09 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb  2 12:25:09 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb  2 12:25:09 np0005605476 python3.9[103668]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 12:25:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb  2 12:25:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb  2 12:25:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:10 np0005605476 python3.9[103820]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:25:11 np0005605476 python3.9[103972]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:25:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb  2 12:25:11 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb  2 12:25:11 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb  2 12:25:11 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb  2 12:25:11 np0005605476 python3.9[104050]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:25:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb  2 12:25:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb  2 12:25:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:12 np0005605476 python3.9[104202]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:25:13 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb  2 12:25:13 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb  2 12:25:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb  2 12:25:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb  2 12:25:13 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb  2 12:25:13 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb  2 12:25:13 np0005605476 python3.9[104356]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 12:25:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:14 np0005605476 python3.9[104509]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 12:25:14 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb  2 12:25:14 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb  2 12:25:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:15 np0005605476 python3.9[104662]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:25:15 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb  2 12:25:15 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb  2 12:25:15 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb  2 12:25:15 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb  2 12:25:16 np0005605476 python3.9[104814]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 12:25:16 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb  2 12:25:16 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb  2 12:25:16 np0005605476 python3.9[104966]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:17 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb  2 12:25:17 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb  2 12:25:18 np0005605476 python3.9[105119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:25:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:19 np0005605476 python3.9[105271]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:25:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:19 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb  2 12:25:19 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb  2 12:25:19 np0005605476 python3.9[105349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:25:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb  2 12:25:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb  2 12:25:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb  2 12:25:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb  2 12:25:20 np0005605476 python3.9[105501]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:25:20 np0005605476 python3.9[105580]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:25:20 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb  2 12:25:20 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb  2 12:25:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:21 np0005605476 python3.9[105732]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:22 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb  2 12:25:22 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb  2 12:25:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:23 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb  2 12:25:23 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb  2 12:25:23 np0005605476 python3.9[105933]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.697343558 +0000 UTC m=+0.041225029 container create 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:25:23 np0005605476 systemd[1]: Started libpod-conmon-4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23.scope.
Feb  2 12:25:23 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb  2 12:25:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:23 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.764480342 +0000 UTC m=+0.108361843 container init 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.769011955 +0000 UTC m=+0.112893426 container start 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.771976428 +0000 UTC m=+0.115857989 container attach 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:25:23 np0005605476 peaceful_darwin[106195]: 167 167
Feb  2 12:25:23 np0005605476 systemd[1]: libpod-4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23.scope: Deactivated successfully.
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.77490003 +0000 UTC m=+0.118781511 container died 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.682133269 +0000 UTC m=+0.026014760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay-904b9952aea256406a943bf3ea9304a9e92a9bc477460079f5714ce7bc003094-merged.mount: Deactivated successfully.
Feb  2 12:25:23 np0005605476 podman[106136]: 2026-02-02 17:25:23.811870274 +0000 UTC m=+0.155751755 container remove 4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:25:23 np0005605476 systemd[1]: libpod-conmon-4b82bf0d30db5803536e389bd04219d9026e549e87ff14f97144433b341d1b23.scope: Deactivated successfully.
Feb  2 12:25:23 np0005605476 python3.9[106192]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 12:25:23 np0005605476 podman[106220]: 2026-02-02 17:25:23.925660977 +0000 UTC m=+0.033126314 container create f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 12:25:23 np0005605476 systemd[1]: Started libpod-conmon-f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2.scope.
Feb  2 12:25:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:24 np0005605476 podman[106220]: 2026-02-02 17:25:23.910002504 +0000 UTC m=+0.017467881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:24 np0005605476 podman[106220]: 2026-02-02 17:25:24.007764153 +0000 UTC m=+0.115229510 container init f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:25:24 np0005605476 podman[106220]: 2026-02-02 17:25:24.014935238 +0000 UTC m=+0.122400585 container start f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:25:24 np0005605476 podman[106220]: 2026-02-02 17:25:24.018610594 +0000 UTC m=+0.126075941 container attach f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:25:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:24 np0005605476 python3.9[106395]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:25:24 np0005605476 blissful_roentgen[106261]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:25:24 np0005605476 blissful_roentgen[106261]: --> All data devices are unavailable
Feb  2 12:25:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:24 np0005605476 systemd[1]: libpod-f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2.scope: Deactivated successfully.
Feb  2 12:25:24 np0005605476 podman[106416]: 2026-02-02 17:25:24.789197308 +0000 UTC m=+0.024151651 container died f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:25:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-04928a12163a3406ec80d936665eb89070981777464fbf1285479bbb531f822a-merged.mount: Deactivated successfully.
Feb  2 12:25:24 np0005605476 podman[106416]: 2026-02-02 17:25:24.826994758 +0000 UTC m=+0.061949071 container remove f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 12:25:24 np0005605476 systemd[1]: libpod-conmon-f8ca586f28359eccae7092ae879d70edfd7f74a0c3d6b3b182996cf6a9cbbff2.scope: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.206446457 +0000 UTC m=+0.050252844 container create ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:25:25 np0005605476 systemd[1]: Started libpod-conmon-ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48.scope.
Feb  2 12:25:25 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.273707505 +0000 UTC m=+0.117513912 container init ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.183243366 +0000 UTC m=+0.027049783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.280691035 +0000 UTC m=+0.124497422 container start ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:25:25 np0005605476 determined_blackwell[106577]: 167 167
Feb  2 12:25:25 np0005605476 systemd[1]: libpod-ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48.scope: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.284779503 +0000 UTC m=+0.128585940 container attach ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.2853038 +0000 UTC m=+0.129110187 container died ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:25:25 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8ad2ec9e7aeb76abde15eeeb828146498df053ed39823be77017ade73c1bcb10-merged.mount: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106561]: 2026-02-02 17:25:25.314995525 +0000 UTC m=+0.158801902 container remove ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:25:25 np0005605476 systemd[1]: libpod-conmon-ce52e696acaf73c8c364ec7ee57f143d90c4e2f9409cccc658e4fee5c0898a48.scope: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.441161018 +0000 UTC m=+0.050958766 container create b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:25:25 np0005605476 systemd[1]: Started libpod-conmon-b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57.scope.
Feb  2 12:25:25 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:25 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c1457e70dab6882bba1c526d9d5c3ea7a44982d28720204ee087c0b229f21b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:25 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c1457e70dab6882bba1c526d9d5c3ea7a44982d28720204ee087c0b229f21b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:25 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c1457e70dab6882bba1c526d9d5c3ea7a44982d28720204ee087c0b229f21b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:25 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c1457e70dab6882bba1c526d9d5c3ea7a44982d28720204ee087c0b229f21b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.421580191 +0000 UTC m=+0.031377919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.520852977 +0000 UTC m=+0.130650725 container init b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.529646944 +0000 UTC m=+0.139444652 container start b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.533103383 +0000 UTC m=+0.142901131 container attach b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:25:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb  2 12:25:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]: {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    "0": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "devices": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "/dev/loop3"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            ],
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_name": "ceph_lv0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_size": "21470642176",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "name": "ceph_lv0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "tags": {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_name": "ceph",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.crush_device_class": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.encrypted": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.objectstore": "bluestore",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_id": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.vdo": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.with_tpm": "0"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            },
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "vg_name": "ceph_vg0"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        }
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    ],
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    "1": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "devices": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "/dev/loop4"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            ],
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_name": "ceph_lv1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_size": "21470642176",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "name": "ceph_lv1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "tags": {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_name": "ceph",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.crush_device_class": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.encrypted": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.objectstore": "bluestore",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_id": "1",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.vdo": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.with_tpm": "0"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            },
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "vg_name": "ceph_vg1"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        }
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    ],
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    "2": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "devices": [
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "/dev/loop5"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            ],
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_name": "ceph_lv2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_size": "21470642176",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "name": "ceph_lv2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "tags": {
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.cluster_name": "ceph",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.crush_device_class": "",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.encrypted": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.objectstore": "bluestore",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osd_id": "2",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.vdo": "0",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:                "ceph.with_tpm": "0"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            },
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "type": "block",
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:            "vg_name": "ceph_vg2"
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:        }
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]:    ]
Feb  2 12:25:25 np0005605476 crazy_mclean[106641]: }
Feb  2 12:25:25 np0005605476 systemd[1]: libpod-b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57.scope: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.804014263 +0000 UTC m=+0.413811971 container died b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:25:25 np0005605476 systemd[1]: var-lib-containers-storage-overlay-30c1457e70dab6882bba1c526d9d5c3ea7a44982d28720204ee087c0b229f21b-merged.mount: Deactivated successfully.
Feb  2 12:25:25 np0005605476 podman[106602]: 2026-02-02 17:25:25.842171275 +0000 UTC m=+0.451968973 container remove b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:25:25 np0005605476 systemd[1]: libpod-conmon-b5c22b9c02b9e69d013e1952c9551a68fa0cd4e9644429d35f75834f8c3a9b57.scope: Deactivated successfully.
Feb  2 12:25:25 np0005605476 python3.9[106698]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:25:25 np0005605476 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 12:25:26 np0005605476 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 12:25:26 np0005605476 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 12:25:26 np0005605476 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.248849801 +0000 UTC m=+0.034757756 container create f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:25:26 np0005605476 systemd[1]: Started libpod-conmon-f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9.scope.
Feb  2 12:25:26 np0005605476 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 12:25:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.309245333 +0000 UTC m=+0.095153298 container init f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.314622542 +0000 UTC m=+0.100530497 container start f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.317462331 +0000 UTC m=+0.103370286 container attach f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:25:26 np0005605476 systemd[1]: libpod-f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9.scope: Deactivated successfully.
Feb  2 12:25:26 np0005605476 kind_wing[106806]: 167 167
Feb  2 12:25:26 np0005605476 conmon[106806]: conmon f7178cc04fb7f6b0828d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9.scope/container/memory.events
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.319861237 +0000 UTC m=+0.105769192 container died f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.231470834 +0000 UTC m=+0.017378789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b7f482ea8fee6c1f5e27b8dd4eab2c0cb2dc1ce2d298ca9edfa0611ce9122af6-merged.mount: Deactivated successfully.
Feb  2 12:25:26 np0005605476 podman[106789]: 2026-02-02 17:25:26.354535509 +0000 UTC m=+0.140443504 container remove f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:25:26 np0005605476 systemd[1]: libpod-conmon-f7178cc04fb7f6b0828d3e599c3660021ad5bc97ee9f242b82e6918d02b73db9.scope: Deactivated successfully.
Feb  2 12:25:26 np0005605476 podman[106853]: 2026-02-02 17:25:26.467911039 +0000 UTC m=+0.045135613 container create 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:25:26 np0005605476 systemd[1]: Started libpod-conmon-896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66.scope.
Feb  2 12:25:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:25:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d649a9508f09a0b95f918ee8e457e7db6e4b5d12acf5211c925165f2fd99ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d649a9508f09a0b95f918ee8e457e7db6e4b5d12acf5211c925165f2fd99ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d649a9508f09a0b95f918ee8e457e7db6e4b5d12acf5211c925165f2fd99ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d649a9508f09a0b95f918ee8e457e7db6e4b5d12acf5211c925165f2fd99ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:25:26 np0005605476 podman[106853]: 2026-02-02 17:25:26.441506677 +0000 UTC m=+0.018731311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:25:26 np0005605476 podman[106853]: 2026-02-02 17:25:26.553015549 +0000 UTC m=+0.130240123 container init 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:25:26 np0005605476 podman[106853]: 2026-02-02 17:25:26.55973594 +0000 UTC m=+0.136960494 container start 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:25:26 np0005605476 podman[106853]: 2026-02-02 17:25:26.562740945 +0000 UTC m=+0.139965509 container attach 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:25:26 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb  2 12:25:26 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb  2 12:25:26 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb  2 12:25:26 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb  2 12:25:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:26 np0005605476 python3.9[107001]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 12:25:27 np0005605476 lvm[107098]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:25:27 np0005605476 lvm[107098]: VG ceph_vg0 finished
Feb  2 12:25:27 np0005605476 lvm[107099]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:25:27 np0005605476 lvm[107099]: VG ceph_vg1 finished
Feb  2 12:25:27 np0005605476 lvm[107101]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:25:27 np0005605476 lvm[107101]: VG ceph_vg2 finished
Feb  2 12:25:27 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb  2 12:25:27 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb  2 12:25:27 np0005605476 exciting_austin[106893]: {}
Feb  2 12:25:27 np0005605476 systemd[1]: libpod-896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66.scope: Deactivated successfully.
Feb  2 12:25:27 np0005605476 systemd[1]: libpod-896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66.scope: Consumed 1.080s CPU time.
Feb  2 12:25:27 np0005605476 podman[106853]: 2026-02-02 17:25:27.321776036 +0000 UTC m=+0.899000580 container died 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:25:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-18d649a9508f09a0b95f918ee8e457e7db6e4b5d12acf5211c925165f2fd99ca-merged.mount: Deactivated successfully.
Feb  2 12:25:27 np0005605476 podman[106853]: 2026-02-02 17:25:27.372270386 +0000 UTC m=+0.949494930 container remove 896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_austin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:25:27 np0005605476 systemd[1]: libpod-conmon-896f5f3ca02c03cc3111cc570881cd55f35839382881d10faccebf94152a6c66.scope: Deactivated successfully.
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:25:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb  2 12:25:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb  2 12:25:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb  2 12:25:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb  2 12:25:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:29 np0005605476 python3.9[107269]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:25:30 np0005605476 python3.9[107423]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:25:30 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb  2 12:25:30 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb  2 12:25:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:30 np0005605476 systemd[1]: session-34.scope: Deactivated successfully.
Feb  2 12:25:30 np0005605476 systemd[1]: session-34.scope: Consumed 1min 3.910s CPU time.
Feb  2 12:25:30 np0005605476 systemd-logind[799]: Session 34 logged out. Waiting for processes to exit.
Feb  2 12:25:30 np0005605476 systemd-logind[799]: Removed session 34.
Feb  2 12:25:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb  2 12:25:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb  2 12:25:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb  2 12:25:33 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb  2 12:25:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:34 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb  2 12:25:34 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb  2 12:25:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb  2 12:25:35 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb  2 12:25:36 np0005605476 systemd-logind[799]: New session 35 of user zuul.
Feb  2 12:25:36 np0005605476 systemd[1]: Started Session 35 of User zuul.
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:25:36
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr', 'backups', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control']
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:25:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:36 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb  2 12:25:36 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb  2 12:25:37 np0005605476 python3.9[107603]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:25:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:25:38 np0005605476 python3.9[107759]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 12:25:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb  2 12:25:38 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb  2 12:25:38 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb  2 12:25:38 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb  2 12:25:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb  2 12:25:38 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb  2 12:25:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:39 np0005605476 python3.9[107912]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:25:40 np0005605476 python3.9[107996]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 12:25:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:41 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb  2 12:25:41 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb  2 12:25:41 np0005605476 python3.9[108149]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:42 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb  2 12:25:42 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb  2 12:25:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb  2 12:25:42 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb  2 12:25:42 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb  2 12:25:42 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb  2 12:25:44 np0005605476 python3.9[108302]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:25:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:44 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb  2 12:25:44 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb  2 12:25:44 np0005605476 python3.9[108455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:25:45 np0005605476 python3.9[108607]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 12:25:46 np0005605476 python3.9[108757]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:25:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:46 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb  2 12:25:46 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:25:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:25:47 np0005605476 python3.9[108915]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:48 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb  2 12:25:48 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb  2 12:25:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:49 np0005605476 python3.9[109068]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:25:50 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb  2 12:25:50 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb  2 12:25:50 np0005605476 python3.9[109355]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 12:25:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:51 np0005605476 python3.9[109505]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:25:51 np0005605476 python3.9[109659]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:52 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb  2 12:25:52 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb  2 12:25:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:52 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb  2 12:25:52 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb  2 12:25:53 np0005605476 python3.9[109812]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:25:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:25:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:55 np0005605476 python3.9[109965]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:25:55 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb  2 12:25:55 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb  2 12:25:56 np0005605476 python3.9[110119]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb  2 12:25:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:56 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb  2 12:25:56 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb  2 12:25:56 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb  2 12:25:56 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb  2 12:25:57 np0005605476 systemd-logind[799]: Session 35 logged out. Waiting for processes to exit.
Feb  2 12:25:57 np0005605476 systemd[1]: session-35.scope: Deactivated successfully.
Feb  2 12:25:57 np0005605476 systemd[1]: session-35.scope: Consumed 15.593s CPU time.
Feb  2 12:25:57 np0005605476 systemd-logind[799]: Removed session 35.
Feb  2 12:25:57 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb  2 12:25:57 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb  2 12:25:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:25:59 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb  2 12:25:59 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb  2 12:25:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:00 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb  2 12:26:00 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb  2 12:26:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:01 np0005605476 systemd-logind[799]: New session 36 of user zuul.
Feb  2 12:26:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb  2 12:26:01 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb  2 12:26:01 np0005605476 systemd[1]: Started Session 36 of User zuul.
Feb  2 12:26:02 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb  2 12:26:02 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb  2 12:26:02 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb  2 12:26:02 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb  2 12:26:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:02 np0005605476 python3.9[110297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:26:03 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb  2 12:26:03 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb  2 12:26:03 np0005605476 python3.9[110451]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:26:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb  2 12:26:03 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb  2 12:26:03 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb  2 12:26:04 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb  2 12:26:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:04 np0005605476 python3.9[110644]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:26:05 np0005605476 systemd[1]: session-36.scope: Deactivated successfully.
Feb  2 12:26:05 np0005605476 systemd[1]: session-36.scope: Consumed 1.987s CPU time.
Feb  2 12:26:05 np0005605476 systemd-logind[799]: Session 36 logged out. Waiting for processes to exit.
Feb  2 12:26:05 np0005605476 systemd-logind[799]: Removed session 36.
Feb  2 12:26:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:26:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:26:07 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb  2 12:26:07 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb  2 12:26:07 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb  2 12:26:07 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb  2 12:26:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:10 np0005605476 systemd-logind[799]: New session 37 of user zuul.
Feb  2 12:26:10 np0005605476 systemd[1]: Started Session 37 of User zuul.
Feb  2 12:26:10 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb  2 12:26:10 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb  2 12:26:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:11 np0005605476 python3.9[110824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:26:11 np0005605476 python3.9[110978]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:26:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb  2 12:26:12 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb  2 12:26:12 np0005605476 python3.9[111134]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:26:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:12 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb  2 12:26:12 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb  2 12:26:13 np0005605476 python3.9[111218]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:26:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb  2 12:26:13 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb  2 12:26:13 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb  2 12:26:13 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb  2 12:26:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:15 np0005605476 python3.9[111371]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:26:15 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb  2 12:26:15 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb  2 12:26:16 np0005605476 python3.9[111566]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:26:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:17 np0005605476 python3.9[111718]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:26:17 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb  2 12:26:17 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb  2 12:26:17 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb  2 12:26:18 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb  2 12:26:18 np0005605476 python3.9[111883]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:26:18 np0005605476 python3.9[111961]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:26:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb  2 12:26:18 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb  2 12:26:18 np0005605476 python3.9[112113]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:26:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:19 np0005605476 python3.9[112191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:26:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb  2 12:26:19 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb  2 12:26:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb  2 12:26:19 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb  2 12:26:20 np0005605476 python3.9[112343]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:26:20 np0005605476 python3.9[112495]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:26:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:21 np0005605476 python3.9[112647]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:26:21 np0005605476 python3.9[112799]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:26:22 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb  2 12:26:22 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb  2 12:26:22 np0005605476 python3.9[112951]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:26:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:22 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb  2 12:26:22 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb  2 12:26:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:24 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb  2 12:26:24 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb  2 12:26:24 np0005605476 python3.9[113104]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:26:25 np0005605476 python3.9[113258]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:26:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb  2 12:26:25 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb  2 12:26:26 np0005605476 python3.9[113410]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:26:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:26 np0005605476 python3.9[113562]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:26:26 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb  2 12:26:26 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb  2 12:26:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb  2 12:26:27 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb  2 12:26:27 np0005605476 python3.9[113715]: ansible-service_facts Invoked
Feb  2 12:26:27 np0005605476 network[113782]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:26:27 np0005605476 network[113783]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:26:27 np0005605476 network[113784]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:26:27 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb  2 12:26:27 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.411428145 +0000 UTC m=+0.042470498 container create c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:26:28 np0005605476 systemd[76566]: Created slice User Background Tasks Slice.
Feb  2 12:26:28 np0005605476 systemd[76566]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 12:26:28 np0005605476 systemd[1]: Started libpod-conmon-c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68.scope.
Feb  2 12:26:28 np0005605476 systemd[76566]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 12:26:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.392205119 +0000 UTC m=+0.023247502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:26:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb  2 12:26:28 np0005605476 ceph-osd[87792]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.589253701 +0000 UTC m=+0.220296084 container init c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.595318529 +0000 UTC m=+0.226360872 container start c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:26:28 np0005605476 nifty_villani[113913]: 167 167
Feb  2 12:26:28 np0005605476 systemd[1]: libpod-c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68.scope: Deactivated successfully.
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.61790623 +0000 UTC m=+0.248948593 container attach c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.618214609 +0000 UTC m=+0.249256962 container died c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:26:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:26:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e29e0201e7a140389515a767d7371d9f9ca5cbc4e3073764ed8bab0a6fa3e1a7-merged.mount: Deactivated successfully.
Feb  2 12:26:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:26:28 np0005605476 podman[113895]: 2026-02-02 17:26:28.786050624 +0000 UTC m=+0.417092977 container remove c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_villani, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:26:28 np0005605476 systemd[1]: libpod-conmon-c707adc9bdd053ad8fbe8cf3688a555a212b422a10b0dfd9c30ed45d25222f68.scope: Deactivated successfully.
Feb  2 12:26:28 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb  2 12:26:28 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb  2 12:26:28 np0005605476 podman[113939]: 2026-02-02 17:26:28.940225946 +0000 UTC m=+0.064402219 container create 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:26:28 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb  2 12:26:28 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb  2 12:26:28 np0005605476 podman[113939]: 2026-02-02 17:26:28.903659162 +0000 UTC m=+0.027835525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:26:29 np0005605476 systemd[1]: Started libpod-conmon-6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f.scope.
Feb  2 12:26:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:26:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:26:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:26:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:26:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:26:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:26:29 np0005605476 podman[113939]: 2026-02-02 17:26:29.056659127 +0000 UTC m=+0.180835430 container init 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:26:29 np0005605476 podman[113939]: 2026-02-02 17:26:29.069421133 +0000 UTC m=+0.193597406 container start 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:26:29 np0005605476 podman[113939]: 2026-02-02 17:26:29.073327494 +0000 UTC m=+0.197503787 container attach 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:26:29 np0005605476 loving_elgamal[113955]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:26:29 np0005605476 loving_elgamal[113955]: --> All data devices are unavailable
Feb  2 12:26:29 np0005605476 podman[113939]: 2026-02-02 17:26:29.530618528 +0000 UTC m=+0.654794801 container died 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:26:29 np0005605476 systemd[1]: libpod-6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f.scope: Deactivated successfully.
Feb  2 12:26:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0c576365a3f2834cd3f9e610809cdc1b8467171c1fa46931c8b67a8d857fe30c-merged.mount: Deactivated successfully.
Feb  2 12:26:29 np0005605476 podman[113939]: 2026-02-02 17:26:29.570594958 +0000 UTC m=+0.694771231 container remove 6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:26:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:26:29 np0005605476 systemd[1]: libpod-conmon-6c390cb6b996a7c9f6977da8ddcde4f4dda8b7cc15f0a1b5b4d7a179b7c38b9f.scope: Deactivated successfully.
Feb  2 12:27:07 np0005605476 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 12:27:07 np0005605476 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 12:27:07 np0005605476 systemd[1]: Finished Create netns directory.
Feb  2 12:27:08 np0005605476 rsyslogd[1006]: imjournal: 600 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb  2 12:27:08 np0005605476 python3.9[119247]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:27:08 np0005605476 network[119264]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:27:08 np0005605476 network[119265]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:27:08 np0005605476 network[119266]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:27:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:10 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb  2 12:27:10 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb  2 12:27:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb  2 12:27:10 np0005605476 ceph-osd[85696]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb  2 12:27:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:11 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb  2 12:27:11 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb  2 12:27:12 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb  2 12:27:12 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb  2 12:27:12 np0005605476 python3.9[119528]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:12 np0005605476 python3.9[119606]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:13 np0005605476 python3.9[119758]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:14 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb  2 12:27:14 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb  2 12:27:14 np0005605476 python3.9[119910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:14 np0005605476 python3.9[119988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:15 np0005605476 python3.9[120140]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 12:27:15 np0005605476 systemd[1]: Starting Time & Date Service...
Feb  2 12:27:15 np0005605476 systemd[1]: Started Time & Date Service.
Feb  2 12:27:16 np0005605476 python3.9[120296]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:16 np0005605476 python3.9[120448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:17 np0005605476 python3.9[120526]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:17 np0005605476 python3.9[120678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:18 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb  2 12:27:18 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb  2 12:27:18 np0005605476 python3.9[120756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bjlfoazg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:18 np0005605476 python3.9[120908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:19 np0005605476 python3.9[120986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:19 np0005605476 python3.9[121138]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:27:20 np0005605476 python3[121291]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 12:27:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:21 np0005605476 python3.9[121443]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:21 np0005605476 python3.9[121521]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:21 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb  2 12:27:22 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb  2 12:27:22 np0005605476 python3.9[121673]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:23 np0005605476 python3.9[121798]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053241.9836478-308-252440775087048/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:23 np0005605476 python3.9[121950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:24 np0005605476 python3.9[122028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:24 np0005605476 python3.9[122180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:25 np0005605476 python3.9[122258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:25 np0005605476 python3.9[122410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:26 np0005605476 python3.9[122488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:26 np0005605476 python3.9[122640]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:27:27 np0005605476 python3.9[122795]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:28 np0005605476 python3.9[122947]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:28 np0005605476 python3.9[123099]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:29 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb  2 12:27:29 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb  2 12:27:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:29 np0005605476 python3.9[123251]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 12:27:30 np0005605476 python3.9[123403]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 12:27:30 np0005605476 systemd[1]: session-39.scope: Deactivated successfully.
Feb  2 12:27:30 np0005605476 systemd[1]: session-39.scope: Consumed 24.687s CPU time.
Feb  2 12:27:30 np0005605476 systemd-logind[799]: Session 39 logged out. Waiting for processes to exit.
Feb  2 12:27:30 np0005605476 systemd-logind[799]: Removed session 39.
Feb  2 12:27:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb  2 12:27:31 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb  2 12:27:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb  2 12:27:32 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:27:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:27:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:27:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:27:33 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.121197993 +0000 UTC m=+0.045521697 container create 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:27:33 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb  2 12:27:33 np0005605476 systemd[1]: Started libpod-conmon-67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136.scope.
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.102089187 +0000 UTC m=+0.026412891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.207650712 +0000 UTC m=+0.131974396 container init 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.212451819 +0000 UTC m=+0.136775483 container start 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.215226371 +0000 UTC m=+0.139550065 container attach 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:27:33 np0005605476 loving_lehmann[123587]: 167 167
Feb  2 12:27:33 np0005605476 systemd[1]: libpod-67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136.scope: Deactivated successfully.
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.216658053 +0000 UTC m=+0.140981717 container died 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:27:33 np0005605476 systemd[1]: var-lib-containers-storage-overlay-12483c7b90cda00e660814e322ca582e3bd47062d20ffd1caeaebff5787aa93c-merged.mount: Deactivated successfully.
Feb  2 12:27:33 np0005605476 podman[123571]: 2026-02-02 17:27:33.259235093 +0000 UTC m=+0.183558757 container remove 67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:27:33 np0005605476 systemd[1]: libpod-conmon-67dc1b7fc35c4d97fab7ad327e2c269bce9ccb40456a563ffb515dc11f807136.scope: Deactivated successfully.
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.383587338 +0000 UTC m=+0.037992559 container create 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:27:33 np0005605476 systemd[1]: Started libpod-conmon-09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b.scope.
Feb  2 12:27:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.363365367 +0000 UTC m=+0.017770598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.4844691 +0000 UTC m=+0.138874321 container init 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.492735124 +0000 UTC m=+0.147140305 container start 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.496826506 +0000 UTC m=+0.151231707 container attach 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:27:33 np0005605476 affectionate_banach[123627]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:27:33 np0005605476 affectionate_banach[123627]: --> All data devices are unavailable
Feb  2 12:27:33 np0005605476 systemd[1]: libpod-09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b.scope: Deactivated successfully.
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.920848099 +0000 UTC m=+0.575253280 container died 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:27:33 np0005605476 systemd[1]: var-lib-containers-storage-overlay-05c01c6f61d5da6657ed8a7a4526c22fa603ad67499855f412455ac32439bd0c-merged.mount: Deactivated successfully.
Feb  2 12:27:33 np0005605476 podman[123611]: 2026-02-02 17:27:33.959770037 +0000 UTC m=+0.614175218 container remove 09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:27:33 np0005605476 systemd[1]: libpod-conmon-09d0d3b0fa4676698c2b0e68c72c88d0b5edf9343cf18f08295f42fa25c6ee8b.scope: Deactivated successfully.
Feb  2 12:27:34 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb  2 12:27:34 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.35373644 +0000 UTC m=+0.040257440 container create 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:27:34 np0005605476 systemd[1]: Started libpod-conmon-398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0.scope.
Feb  2 12:27:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.413695978 +0000 UTC m=+0.100217018 container init 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.419800074 +0000 UTC m=+0.106321074 container start 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:27:34 np0005605476 tender_villani[123737]: 167 167
Feb  2 12:27:34 np0005605476 systemd[1]: libpod-398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0.scope: Deactivated successfully.
Feb  2 12:27:34 np0005605476 conmon[123737]: conmon 398bef980fd7ec7a5270 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0.scope/container/memory.events
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.424773065 +0000 UTC m=+0.111294295 container attach 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.425134953 +0000 UTC m=+0.111655993 container died 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.336087066 +0000 UTC m=+0.022608106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-cb5c60c492b05e44bf6ea0d2012236f51cc1cab3ce082cd1b8a22388775a7be7-merged.mount: Deactivated successfully.
Feb  2 12:27:34 np0005605476 podman[123721]: 2026-02-02 17:27:34.453943416 +0000 UTC m=+0.140464416 container remove 398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:27:34 np0005605476 systemd[1]: libpod-conmon-398bef980fd7ec7a52705221c08c19398593cdaac6889651c7ef029223d107e0.scope: Deactivated successfully.
Feb  2 12:27:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:34 np0005605476 podman[123761]: 2026-02-02 17:27:34.600422725 +0000 UTC m=+0.045286181 container create 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:27:34 np0005605476 systemd[1]: Started libpod-conmon-5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90.scope.
Feb  2 12:27:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edacd3fbaed4696f5f08ec6eee4927523488be1c70616dc07de1a1d90ffeaff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edacd3fbaed4696f5f08ec6eee4927523488be1c70616dc07de1a1d90ffeaff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edacd3fbaed4696f5f08ec6eee4927523488be1c70616dc07de1a1d90ffeaff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edacd3fbaed4696f5f08ec6eee4927523488be1c70616dc07de1a1d90ffeaff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:34 np0005605476 podman[123761]: 2026-02-02 17:27:34.583791694 +0000 UTC m=+0.028655190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:34 np0005605476 podman[123761]: 2026-02-02 17:27:34.687265933 +0000 UTC m=+0.132129429 container init 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:27:34 np0005605476 podman[123761]: 2026-02-02 17:27:34.691711663 +0000 UTC m=+0.136575129 container start 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:27:34 np0005605476 podman[123761]: 2026-02-02 17:27:34.695036487 +0000 UTC m=+0.139899963 container attach 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:27:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]: {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    "0": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "devices": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "/dev/loop3"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            ],
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_name": "ceph_lv0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_size": "21470642176",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "name": "ceph_lv0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "tags": {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_name": "ceph",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.crush_device_class": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.encrypted": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.objectstore": "bluestore",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_id": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.vdo": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.with_tpm": "0"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            },
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "vg_name": "ceph_vg0"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        }
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    ],
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    "1": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "devices": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "/dev/loop4"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            ],
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_name": "ceph_lv1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_size": "21470642176",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "name": "ceph_lv1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "tags": {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_name": "ceph",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.crush_device_class": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.encrypted": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.objectstore": "bluestore",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_id": "1",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.vdo": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.with_tpm": "0"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            },
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "vg_name": "ceph_vg1"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        }
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    ],
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    "2": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "devices": [
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "/dev/loop5"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            ],
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_name": "ceph_lv2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_size": "21470642176",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "name": "ceph_lv2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "tags": {
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.cluster_name": "ceph",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.crush_device_class": "",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.encrypted": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.objectstore": "bluestore",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osd_id": "2",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.vdo": "0",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:                "ceph.with_tpm": "0"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            },
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "type": "block",
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:            "vg_name": "ceph_vg2"
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:        }
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]:    ]
Feb  2 12:27:34 np0005605476 infallible_yalow[123778]: }
Feb  2 12:27:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb  2 12:27:35 np0005605476 systemd[1]: libpod-5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90.scope: Deactivated successfully.
Feb  2 12:27:35 np0005605476 podman[123761]: 2026-02-02 17:27:35.015311815 +0000 UTC m=+0.460175371 container died 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:27:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb  2 12:27:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2edacd3fbaed4696f5f08ec6eee4927523488be1c70616dc07de1a1d90ffeaff-merged.mount: Deactivated successfully.
Feb  2 12:27:35 np0005605476 podman[123761]: 2026-02-02 17:27:35.060626286 +0000 UTC m=+0.505489752 container remove 5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 systemd[1]: libpod-conmon-5e711e1be0bd2cbe6fd4ae64213076497175a366d6ccd0c6bdec4c8fd9fe9b90.scope: Deactivated successfully.
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.457284618 +0000 UTC m=+0.038124251 container create df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 systemd[1]: Started libpod-conmon-df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d.scope.
Feb  2 12:27:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.511014688 +0000 UTC m=+0.091854351 container init df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.519540738 +0000 UTC m=+0.100380381 container start df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.523352823 +0000 UTC m=+0.104192486 container attach df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 nifty_sinoussi[123878]: 167 167
Feb  2 12:27:35 np0005605476 systemd[1]: libpod-df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d.scope: Deactivated successfully.
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.524746074 +0000 UTC m=+0.105585717 container died df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.44256697 +0000 UTC m=+0.023406613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b9c207ea6158213c6afc8e3eea32bb8f33fd943870a6b93ab0f944815777dd2f-merged.mount: Deactivated successfully.
Feb  2 12:27:35 np0005605476 podman[123861]: 2026-02-02 17:27:35.581269225 +0000 UTC m=+0.162108898 container remove df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 systemd[1]: libpod-conmon-df1ddff7fe38907dc90fad4fe5219328cd46a49a37128e9ceb6579004b45a03d.scope: Deactivated successfully.
Feb  2 12:27:35 np0005605476 systemd-logind[799]: New session 40 of user zuul.
Feb  2 12:27:35 np0005605476 systemd[1]: Started Session 40 of User zuul.
Feb  2 12:27:35 np0005605476 podman[123904]: 2026-02-02 17:27:35.745823468 +0000 UTC m=+0.055966550 container create 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 systemd[1]: Started libpod-conmon-35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d.scope.
Feb  2 12:27:35 np0005605476 podman[123904]: 2026-02-02 17:27:35.725117306 +0000 UTC m=+0.035260418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:27:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:27:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d97c6960a97da61ea3b0caac4d935ebf82de72404e2b0f9c97ea4b277a463e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d97c6960a97da61ea3b0caac4d935ebf82de72404e2b0f9c97ea4b277a463e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d97c6960a97da61ea3b0caac4d935ebf82de72404e2b0f9c97ea4b277a463e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d97c6960a97da61ea3b0caac4d935ebf82de72404e2b0f9c97ea4b277a463e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:27:35 np0005605476 podman[123904]: 2026-02-02 17:27:35.849210575 +0000 UTC m=+0.159353717 container init 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:27:35 np0005605476 podman[123904]: 2026-02-02 17:27:35.858141745 +0000 UTC m=+0.168284817 container start 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:27:35 np0005605476 podman[123904]: 2026-02-02 17:27:35.889615547 +0000 UTC m=+0.199758639 container attach 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:27:35 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb  2 12:27:36 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb  2 12:27:36 np0005605476 lvm[124153]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:27:36 np0005605476 lvm[124153]: VG ceph_vg1 finished
Feb  2 12:27:36 np0005605476 lvm[124150]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:27:36 np0005605476 lvm[124150]: VG ceph_vg0 finished
Feb  2 12:27:36 np0005605476 python3.9[124130]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 12:27:36 np0005605476 lvm[124155]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:27:36 np0005605476 lvm[124155]: VG ceph_vg2 finished
Feb  2 12:27:36 np0005605476 boring_carver[123922]: {}
Feb  2 12:27:36 np0005605476 systemd[1]: libpod-35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d.scope: Deactivated successfully.
Feb  2 12:27:36 np0005605476 systemd[1]: libpod-35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d.scope: Consumed 1.131s CPU time.
Feb  2 12:27:36 np0005605476 podman[123904]: 2026-02-02 17:27:36.647102902 +0000 UTC m=+0.957245964 container died 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:27:36 np0005605476 systemd[1]: var-lib-containers-storage-overlay-41d97c6960a97da61ea3b0caac4d935ebf82de72404e2b0f9c97ea4b277a463e-merged.mount: Deactivated successfully.
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:27:36
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log']
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:27:36 np0005605476 podman[123904]: 2026-02-02 17:27:36.696182298 +0000 UTC m=+1.006325400 container remove 35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:27:36 np0005605476 systemd[1]: libpod-conmon-35b54c12bbb06e4a3857cb54b42d7bb4f94ec8a4ceeee9ef148f806b9511880d.scope: Deactivated successfully.
Feb  2 12:27:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:27:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:27:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:27:37 np0005605476 python3.9[124347]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:27:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:27:38 np0005605476 python3.9[124501]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb  2 12:27:38 np0005605476 python3.9[124653]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.5_4phq1f follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:27:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:39 np0005605476 python3.9[124778]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.5_4phq1f mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053258.213458-44-88213209845722/.source.5_4phq1f _original_basename=.tnx37gwh follow=False checksum=c75e36c45243f063d6c598cdd468429150872ebc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:40 np0005605476 python3.9[124930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:27:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:41 np0005605476 python3.9[125082]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ6SKSWlzfPU7f7RjN8CFlU375FDDWhb5oRWZrAT4j1px0qJtS9EUoEsYN3Svj45HwgIj7T4L2iiV4fqCeTgFZPq/4EMyOiuIcb6mPFRhO5rV8GFKR83vwwdSnltqS+Wh83m6FsFc38evlSQHewlszztQW5H3sJH8XzOYPvSSAbpwGfukhBmr4nL9btc77XALuIi4XdgZprbGHwAg9IsqqROASIaJ7KZ7Aizr7aOJPuvetUYoHBykOQ4ka4Y8nPexVqjyguk8Pszdv+VNX+6/UEEM2DLGmfuNElBpHOLwRHdXra75FcC3zj4MOyWyvK4HvoiKK9rw0lzyZvlQZK/qeAefgDaAkaJXSjdUDjst9yuKFEcwC9YlIveLG7jq9sPfgSGJwVTBiVoCxNC+QHpbYs6SP+xnDeOndwkBraidIR8ruBZKu+ywEaVpjYoGrastkBD0CL6VfGw9sNHsWDjrw7Cbg6kuuzjTSP+VCj2oOWZv9ZofzACpCGftzfZggo+s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPKrsA58m69x7APjvzXvaVbYTk7XdsFY3HNzsBZWPxir#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCo16m/qvXepjRYVF6qP7nMQdK8bChxoaiXB4sppkC0pGQbaJTq3OB+7vpaqEYym/PNGusm1gpPqmortJLj1DbU=#012 create=True mode=0644 path=/tmp/ansible.5_4phq1f state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:41 np0005605476 python3.9[125234]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.5_4phq1f' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:27:42 np0005605476 python3.9[125388]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.5_4phq1f state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:42 np0005605476 systemd[1]: session-40.scope: Deactivated successfully.
Feb  2 12:27:42 np0005605476 systemd[1]: session-40.scope: Consumed 4.269s CPU time.
Feb  2 12:27:42 np0005605476 systemd-logind[799]: Session 40 logged out. Waiting for processes to exit.
Feb  2 12:27:42 np0005605476 systemd-logind[799]: Removed session 40.
Feb  2 12:27:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb  2 12:27:43 np0005605476 ceph-osd[86737]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb  2 12:27:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:45 np0005605476 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 12:27:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:27:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:27:47 np0005605476 systemd-logind[799]: New session 41 of user zuul.
Feb  2 12:27:47 np0005605476 systemd[1]: Started Session 41 of User zuul.
Feb  2 12:27:48 np0005605476 python3.9[125568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:27:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:49 np0005605476 python3.9[125724]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 12:27:50 np0005605476 python3.9[125878]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:27:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:51 np0005605476 python3.9[126031]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:27:52 np0005605476 python3.9[126184]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:27:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:52 np0005605476 python3.9[126336]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:27:53 np0005605476 systemd[1]: session-41.scope: Deactivated successfully.
Feb  2 12:27:53 np0005605476 systemd[1]: session-41.scope: Consumed 3.247s CPU time.
Feb  2 12:27:53 np0005605476 systemd-logind[799]: Session 41 logged out. Waiting for processes to exit.
Feb  2 12:27:53 np0005605476 systemd-logind[799]: Removed session 41.
Feb  2 12:27:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:27:59 np0005605476 systemd-logind[799]: New session 42 of user zuul.
Feb  2 12:27:59 np0005605476 systemd[1]: Started Session 42 of User zuul.
Feb  2 12:27:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:27:59 np0005605476 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Feb  2 12:27:59 np0005605476 systemd[1]: session-17.scope: Deactivated successfully.
Feb  2 12:27:59 np0005605476 systemd[1]: session-17.scope: Consumed 1min 23.087s CPU time.
Feb  2 12:27:59 np0005605476 systemd-logind[799]: Removed session 17.
Feb  2 12:28:00 np0005605476 python3.9[126514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:28:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:00 np0005605476 python3.9[126670]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:28:01 np0005605476 python3.9[126754]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 12:28:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:03 np0005605476 python3.9[126905]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:28:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:04 np0005605476 python3.9[127056]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:28:05 np0005605476 python3.9[127206]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:28:06 np0005605476 python3.9[127356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:28:06 np0005605476 systemd[1]: session-42.scope: Deactivated successfully.
Feb  2 12:28:06 np0005605476 systemd[1]: session-42.scope: Consumed 5.418s CPU time.
Feb  2 12:28:06 np0005605476 systemd-logind[799]: Session 42 logged out. Waiting for processes to exit.
Feb  2 12:28:06 np0005605476 systemd-logind[799]: Removed session 42.
Feb  2 12:28:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:12 np0005605476 systemd-logind[799]: New session 43 of user zuul.
Feb  2 12:28:12 np0005605476 systemd[1]: Started Session 43 of User zuul.
Feb  2 12:28:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:13 np0005605476 python3.9[127535]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:28:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:14 np0005605476 python3.9[127691]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:15 np0005605476 python3.9[127843]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:15 np0005605476 python3.9[127995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:16 np0005605476 python3.9[128118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053295.3950377-60-136143760070219/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=77ee4f164a78bd243e0bb35ac4216a38bf23ac01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:17 np0005605476 python3.9[128270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:17 np0005605476 python3.9[128393]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053296.6928394-60-98170179612103/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=424d41c7c9076603f9bb831a9d3ebda717f8f917 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:18 np0005605476 python3.9[128545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:18 np0005605476 python3.9[128668]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053297.7493558-60-27358128636891/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7bd3c260f2b7ed76ea8e53bf01e6f6589be1fed7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:19 np0005605476 python3.9[128820]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:19 np0005605476 python3.9[128972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:20 np0005605476 python3.9[129124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:20 np0005605476 python3.9[129247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053300.1580188-119-108277060879935/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1166ccac6db2c66956722e4ab8c6ac04b55dcb53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:21 np0005605476 python3.9[129399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:21 np0005605476 python3.9[129522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053301.1239834-119-109384351583898/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a0d1bc387330f9e82ff7aefc1081fa58e18d1faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:22 np0005605476 python3.9[129674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:23 np0005605476 python3.9[129797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053302.1278417-119-169945554454498/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0328437a62f5fd5796d5048a065bc2e1f106b479 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:23 np0005605476 python3.9[129949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:24 np0005605476 python3.9[130101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:24 np0005605476 python3.9[130253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:25 np0005605476 python3.9[130376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053304.3250434-178-160123844485149/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=303ade15c037891a55246a57da52316d7066540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:25 np0005605476 python3.9[130528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:26 np0005605476 python3.9[130651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053305.4927578-178-8852316685279/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a0d1bc387330f9e82ff7aefc1081fa58e18d1faa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:26 np0005605476 python3.9[130803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:27 np0005605476 python3.9[130926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053306.430445-178-59225094685854/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1be57dfaf999c9c6716ea01e7279684d1096d68c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:28 np0005605476 python3.9[131078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:29 np0005605476 python3.9[131230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:29 np0005605476 python3.9[131353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053309.0490382-246-257198731643347/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:30 np0005605476 python3.9[131505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:31 np0005605476 python3.9[131657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:31 np0005605476 python3.9[131780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053310.6286914-270-11297013962388/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:32 np0005605476 python3.9[131932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:32 np0005605476 python3.9[132084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:33 np0005605476 python3.9[132207]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053312.3382702-294-24928705954449/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:33 np0005605476 python3.9[132359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:34 np0005605476 python3.9[132511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:34 np0005605476 python3.9[132634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053313.9617455-318-225733128168077/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:35 np0005605476 python3.9[132786]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:36 np0005605476 python3.9[132938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:36 np0005605476 python3.9[133061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053315.6234953-342-86859149100679/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:28:36
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'vms', 'default.rgw.control']
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:28:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:37 np0005605476 python3.9[133236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:28:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:28:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:28:37 np0005605476 python3.9[133446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.756525997 +0000 UTC m=+0.038663301 container create 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:28:37 np0005605476 systemd[1]: Started libpod-conmon-054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6.scope.
Feb  2 12:28:37 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.73985376 +0000 UTC m=+0.021991084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.840704226 +0000 UTC m=+0.122841550 container init 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.846546127 +0000 UTC m=+0.128683441 container start 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.850182416 +0000 UTC m=+0.132319750 container attach 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:28:37 np0005605476 infallible_colden[133619]: 167 167
Feb  2 12:28:37 np0005605476 systemd[1]: libpod-054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6.scope: Deactivated successfully.
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.852578702 +0000 UTC m=+0.134716006 container died 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:28:37 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3774f857670b092b93223646fc5de0fbeb6ae0668aa42baaa564e29a67d3be44-merged.mount: Deactivated successfully.
Feb  2 12:28:37 np0005605476 podman[133562]: 2026-02-02 17:28:37.889171066 +0000 UTC m=+0.171308370 container remove 054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:28:37 np0005605476 systemd[1]: libpod-conmon-054ff54f8aae5a96649d84456fe4be587ab2b2413002ff87471831672d9f99a6.scope: Deactivated successfully.
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.003310687 +0000 UTC m=+0.037820699 container create 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:28:38 np0005605476 systemd[1]: Started libpod-conmon-6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c.scope.
Feb  2 12:28:38 np0005605476 python3.9[133659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053317.1701682-366-25719825593935/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=774f06a199fb2742887e8c8ea796aa43397ccb88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:37.983992647 +0000 UTC m=+0.018502689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.090137528 +0000 UTC m=+0.124647550 container init 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.100319038 +0000 UTC m=+0.134829070 container start 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.103520506 +0000 UTC m=+0.138030548 container attach 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:28:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:28:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:28:38 np0005605476 systemd-logind[799]: Session 43 logged out. Waiting for processes to exit.
Feb  2 12:28:38 np0005605476 systemd[1]: session-43.scope: Deactivated successfully.
Feb  2 12:28:38 np0005605476 systemd[1]: session-43.scope: Consumed 18.610s CPU time.
Feb  2 12:28:38 np0005605476 systemd-logind[799]: Removed session 43.
Feb  2 12:28:38 np0005605476 exciting_tesla[133688]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:28:38 np0005605476 exciting_tesla[133688]: --> All data devices are unavailable
Feb  2 12:28:38 np0005605476 systemd[1]: libpod-6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c.scope: Deactivated successfully.
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.488356872 +0000 UTC m=+0.522866894 container died 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:28:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c9f6044f81a686dea6fd04166bfd3bfe96bc1eea89c7b464af8d928fd61da3c3-merged.mount: Deactivated successfully.
Feb  2 12:28:38 np0005605476 podman[133672]: 2026-02-02 17:28:38.524864613 +0000 UTC m=+0.559374625 container remove 6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tesla, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:28:38 np0005605476 systemd[1]: libpod-conmon-6429d333c9bc7d5f928a76a2017bfe2c7335231d0580a756ae1c732cf479e83c.scope: Deactivated successfully.
Feb  2 12:28:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.891951772 +0000 UTC m=+0.031881265 container create 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:28:38 np0005605476 systemd[1]: Started libpod-conmon-918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630.scope.
Feb  2 12:28:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.942196621 +0000 UTC m=+0.082126124 container init 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.947677891 +0000 UTC m=+0.087607384 container start 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.950625302 +0000 UTC m=+0.090554795 container attach 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:28:38 np0005605476 goofy_mahavira[133821]: 167 167
Feb  2 12:28:38 np0005605476 systemd[1]: libpod-918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630.scope: Deactivated successfully.
Feb  2 12:28:38 np0005605476 conmon[133821]: conmon 918f2adefd1013ce414d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630.scope/container/memory.events
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.953013537 +0000 UTC m=+0.092943030 container died 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:28:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d15f5e62633baadf1a43fa6c988aaa55b5740e1c0538c36290525a224ad8cc15-merged.mount: Deactivated successfully.
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.878896624 +0000 UTC m=+0.018826117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:38 np0005605476 podman[133805]: 2026-02-02 17:28:38.986538997 +0000 UTC m=+0.126468490 container remove 918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:28:38 np0005605476 systemd[1]: libpod-conmon-918f2adefd1013ce414d771b65b84d861fa2d49dd3c6dc4babdcff77d341a630.scope: Deactivated successfully.
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.105422428 +0000 UTC m=+0.036845822 container create 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:28:39 np0005605476 systemd[1]: Started libpod-conmon-5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57.scope.
Feb  2 12:28:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937c3626f6ccecdadcf2ea01f3248be3ab11386a17aa4d764c7206abf0474c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937c3626f6ccecdadcf2ea01f3248be3ab11386a17aa4d764c7206abf0474c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937c3626f6ccecdadcf2ea01f3248be3ab11386a17aa4d764c7206abf0474c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937c3626f6ccecdadcf2ea01f3248be3ab11386a17aa4d764c7206abf0474c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.176860278 +0000 UTC m=+0.108283692 container init 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.088309659 +0000 UTC m=+0.019733063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.186145462 +0000 UTC m=+0.117568846 container start 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.190127562 +0000 UTC m=+0.121550976 container attach 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:28:39 np0005605476 epic_hawking[133860]: {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    "0": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "devices": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "/dev/loop3"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            ],
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_name": "ceph_lv0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_size": "21470642176",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "name": "ceph_lv0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "tags": {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_name": "ceph",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.crush_device_class": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.encrypted": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.objectstore": "bluestore",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_id": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.vdo": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.with_tpm": "0"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            },
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "vg_name": "ceph_vg0"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        }
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    ],
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    "1": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "devices": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "/dev/loop4"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            ],
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_name": "ceph_lv1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_size": "21470642176",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "name": "ceph_lv1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "tags": {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_name": "ceph",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.crush_device_class": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.encrypted": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.objectstore": "bluestore",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_id": "1",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.vdo": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.with_tpm": "0"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            },
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "vg_name": "ceph_vg1"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        }
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    ],
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    "2": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "devices": [
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "/dev/loop5"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            ],
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_name": "ceph_lv2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_size": "21470642176",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "name": "ceph_lv2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "tags": {
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.cluster_name": "ceph",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.crush_device_class": "",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.encrypted": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.objectstore": "bluestore",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osd_id": "2",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.vdo": "0",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:                "ceph.with_tpm": "0"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            },
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "type": "block",
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:            "vg_name": "ceph_vg2"
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:        }
Feb  2 12:28:39 np0005605476 epic_hawking[133860]:    ]
Feb  2 12:28:39 np0005605476 epic_hawking[133860]: }
Feb  2 12:28:39 np0005605476 systemd[1]: libpod-5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57.scope: Deactivated successfully.
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.450674418 +0000 UTC m=+0.382097832 container died 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:28:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-937c3626f6ccecdadcf2ea01f3248be3ab11386a17aa4d764c7206abf0474c73-merged.mount: Deactivated successfully.
Feb  2 12:28:39 np0005605476 podman[133843]: 2026-02-02 17:28:39.482748928 +0000 UTC m=+0.414172322 container remove 5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:28:39 np0005605476 systemd[1]: libpod-conmon-5d56708bea19db2c288823f1fe018c334fb1c1d5a083c6769789c9ff76646d57.scope: Deactivated successfully.
Feb  2 12:28:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.86924833 +0000 UTC m=+0.033734196 container create cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:28:39 np0005605476 systemd[1]: Started libpod-conmon-cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6.scope.
Feb  2 12:28:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.920188167 +0000 UTC m=+0.084674063 container init cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.924437764 +0000 UTC m=+0.088923640 container start cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:28:39 np0005605476 sweet_cerf[133961]: 167 167
Feb  2 12:28:39 np0005605476 systemd[1]: libpod-cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6.scope: Deactivated successfully.
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.927726994 +0000 UTC m=+0.092212870 container attach cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.928621689 +0000 UTC m=+0.093107575 container died cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:28:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7c56088bb31ba0373764ca316f485c1898c852a474e7166cd0a1ed75bf18aec4-merged.mount: Deactivated successfully.
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.855176074 +0000 UTC m=+0.019661970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:39 np0005605476 podman[133944]: 2026-02-02 17:28:39.959028483 +0000 UTC m=+0.123514359 container remove cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:28:39 np0005605476 systemd[1]: libpod-conmon-cdeb43e59287f89c675e451143d601c51e4bd0b04c35f4f8e229d53fed3d45b6.scope: Deactivated successfully.
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.062164372 +0000 UTC m=+0.030839137 container create 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:28:40 np0005605476 systemd[1]: Started libpod-conmon-15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9.scope.
Feb  2 12:28:40 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:28:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5242c03d25db5fdfb2752c17c08a1ad5ed29d8d7dd69c8f24b6591a9fcbcd5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5242c03d25db5fdfb2752c17c08a1ad5ed29d8d7dd69c8f24b6591a9fcbcd5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5242c03d25db5fdfb2752c17c08a1ad5ed29d8d7dd69c8f24b6591a9fcbcd5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5242c03d25db5fdfb2752c17c08a1ad5ed29d8d7dd69c8f24b6591a9fcbcd5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.122743464 +0000 UTC m=+0.091418259 container init 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.128513072 +0000 UTC m=+0.097187847 container start 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.131120283 +0000 UTC m=+0.099795048 container attach 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.047402767 +0000 UTC m=+0.016077562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:28:40 np0005605476 lvm[134077]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:28:40 np0005605476 lvm[134077]: VG ceph_vg0 finished
Feb  2 12:28:40 np0005605476 lvm[134080]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:28:40 np0005605476 lvm[134080]: VG ceph_vg1 finished
Feb  2 12:28:40 np0005605476 lvm[134082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:28:40 np0005605476 lvm[134082]: VG ceph_vg2 finished
Feb  2 12:28:40 np0005605476 eager_germain[134001]: {}
Feb  2 12:28:40 np0005605476 systemd[1]: libpod-15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9.scope: Deactivated successfully.
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.800506295 +0000 UTC m=+0.769181060 container died 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:28:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f5242c03d25db5fdfb2752c17c08a1ad5ed29d8d7dd69c8f24b6591a9fcbcd5d-merged.mount: Deactivated successfully.
Feb  2 12:28:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:40 np0005605476 podman[133984]: 2026-02-02 17:28:40.837811858 +0000 UTC m=+0.806486623 container remove 15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:28:40 np0005605476 systemd[1]: libpod-conmon-15c2424c7fafff24b6cfb34665964b265e7ed7ffa77b889a05ef5b7f726ce8c9.scope: Deactivated successfully.
Feb  2 12:28:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:28:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:28:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:41 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:28:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:43 np0005605476 systemd-logind[799]: New session 44 of user zuul.
Feb  2 12:28:43 np0005605476 systemd[1]: Started Session 44 of User zuul.
Feb  2 12:28:44 np0005605476 python3.9[134276]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:44 np0005605476 python3.9[134428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:45 np0005605476 python3.9[134551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053324.188858-29-146850454646238/.source.conf _original_basename=ceph.conf follow=False checksum=8283e4a6746924ee76081d3df3c750323982f0ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:45 np0005605476 python3.9[134703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:28:46 np0005605476 python3.9[134826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053325.6310542-29-69259406162590/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=c05a45844c01ac516fc883d7d16b3b5808c36afe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:28:46 np0005605476 systemd[1]: session-44.scope: Deactivated successfully.
Feb  2 12:28:46 np0005605476 systemd[1]: session-44.scope: Consumed 2.186s CPU time.
Feb  2 12:28:46 np0005605476 systemd-logind[799]: Session 44 logged out. Waiting for processes to exit.
Feb  2 12:28:46 np0005605476 systemd-logind[799]: Removed session 44.
Feb  2 12:28:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:28:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:28:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:52 np0005605476 systemd-logind[799]: New session 45 of user zuul.
Feb  2 12:28:52 np0005605476 systemd[1]: Started Session 45 of User zuul.
Feb  2 12:28:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:53 np0005605476 python3.9[135004]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:28:54 np0005605476 python3.9[135160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:28:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:55 np0005605476 python3.9[135312]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:28:55 np0005605476 python3.9[135462]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:28:56 np0005605476 python3.9[135614]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 12:28:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:57 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb  2 12:28:58 np0005605476 python3.9[135770]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:28:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:28:58 np0005605476 python3.9[135854]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:28:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:01 np0005605476 python3.9[136007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:29:01 np0005605476 python3[136162]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb  2 12:29:02 np0005605476 python3.9[136314]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:03 np0005605476 python3.9[136466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:03 np0005605476 python3.9[136544]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:04 np0005605476 python3.9[136696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:04 np0005605476 python3.9[136774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uk9_nuuq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:05 np0005605476 python3.9[136926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:05 np0005605476 python3.9[137004]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:06 np0005605476 python3.9[137156]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:07 np0005605476 python3[137309]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:07 np0005605476 python3.9[137461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:08 np0005605476 python3.9[137586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053347.2847354-152-105024047304462/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:08 np0005605476 python3.9[137738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:09 np0005605476 python3.9[137863]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053348.5116823-167-168522391096398/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:09 np0005605476 python3.9[138015]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:10 np0005605476 python3.9[138140]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053349.5408466-182-85399797718215/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:10 np0005605476 python3.9[138292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:11 np0005605476 python3.9[138417]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053350.563114-197-52150004861425/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:12 np0005605476 python3.9[138569]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:12 np0005605476 python3.9[138694]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053351.6744382-212-176094279421268/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:13 np0005605476 python3.9[138846]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:13 np0005605476 python3.9[138998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:14 np0005605476 python3.9[139153]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:15 np0005605476 python3.9[139305]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:15 np0005605476 python3.9[139458]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:29:16 np0005605476 python3.9[139612]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:16 np0005605476 python3.9[139767]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:17 np0005605476 python3.9[139917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:29:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:18 np0005605476 python3.9[140070]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:18 np0005605476 ovs-vsctl[140071]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb  2 12:29:19 np0005605476 python3.9[140223]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:20 np0005605476 python3.9[140378]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:20 np0005605476 ovs-vsctl[140379]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb  2 12:29:20 np0005605476 python3.9[140529]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:29:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:21 np0005605476 python3.9[140683]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:21 np0005605476 python3.9[140835]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:22 np0005605476 python3.9[140913]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:22 np0005605476 python3.9[141065]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:23 np0005605476 python3.9[141143]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:23 np0005605476 python3.9[141295]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:24 np0005605476 python3.9[141447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:24 np0005605476 python3.9[141525]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:25 np0005605476 python3.9[141677]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:25 np0005605476 python3.9[141755]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:26 np0005605476 python3.9[141907]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:29:26 np0005605476 systemd[1]: Reloading.
Feb  2 12:29:26 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:29:26 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:29:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:27 np0005605476 python3.9[142097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:27 np0005605476 python3.9[142175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:28 np0005605476 python3.9[142327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:28 np0005605476 python3.9[142405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:29 np0005605476 python3.9[142557]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:29:29 np0005605476 systemd[1]: Reloading.
Feb  2 12:29:29 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:29:29 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.617805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369617904, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1543, "num_deletes": 252, "total_data_size": 2377742, "memory_usage": 2413048, "flush_reason": "Manual Compaction"}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369627642, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1372735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7383, "largest_seqno": 8925, "table_properties": {"data_size": 1367604, "index_size": 2335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13976, "raw_average_key_size": 20, "raw_value_size": 1355830, "raw_average_value_size": 1962, "num_data_blocks": 111, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053211, "oldest_key_time": 1770053211, "file_creation_time": 1770053369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9879 microseconds, and 5656 cpu microseconds.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.627705) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1372735 bytes OK
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.627726) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.628981) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.629039) EVENT_LOG_v1 {"time_micros": 1770053369628993, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.629079) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2370913, prev total WAL file size 2370913, number of live WAL files 2.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.629838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1340KB)], [20(7609KB)]
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369629901, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9164355, "oldest_snapshot_seqno": -1}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3363 keys, 7084926 bytes, temperature: kUnknown
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369659381, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7084926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7059402, "index_size": 16007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80531, "raw_average_key_size": 23, "raw_value_size": 6995576, "raw_average_value_size": 2080, "num_data_blocks": 710, "num_entries": 3363, "num_filter_entries": 3363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770053369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.659635) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7084926 bytes
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.661188) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 309.8 rd, 239.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.8) write-amplify(5.2) OK, records in: 3806, records dropped: 443 output_compression: NoCompression
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.661209) EVENT_LOG_v1 {"time_micros": 1770053369661199, "job": 6, "event": "compaction_finished", "compaction_time_micros": 29579, "compaction_time_cpu_micros": 11989, "output_level": 6, "num_output_files": 1, "total_output_size": 7084926, "num_input_records": 3806, "num_output_records": 3363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369661474, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053369662051, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.629739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.662206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.662214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.662216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.662218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:29:29.662220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:29:29 np0005605476 systemd[1]: Starting Create netns directory...
Feb  2 12:29:29 np0005605476 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 12:29:29 np0005605476 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 12:29:29 np0005605476 systemd[1]: Finished Create netns directory.
Feb  2 12:29:30 np0005605476 python3.9[142751]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:31 np0005605476 python3.9[142903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:31 np0005605476 python3.9[143026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053370.6650233-463-1720176648656/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:32 np0005605476 python3.9[143178]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:32 np0005605476 python3.9[143330]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:33 np0005605476 python3.9[143482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:33 np0005605476 python3.9[143605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053373.1535635-496-180884523337112/.source.json _original_basename=.gzi18khd follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:34 np0005605476 python3.9[143755]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:36 np0005605476 python3.9[144178]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:29:36
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:29:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:37 np0005605476 python3.9[144330]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:29:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:29:38 np0005605476 python3[144482]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 12:29:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:29:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:29:42 np0005605476 podman[144496]: 2026-02-02 17:29:42.667654599 +0000 UTC m=+4.312279060 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 12:29:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:29:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.680797676 +0000 UTC m=+0.031271301 container create abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:29:42 np0005605476 systemd[1]: Started libpod-conmon-abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f.scope.
Feb  2 12:29:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.747065678 +0000 UTC m=+0.097539333 container init abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.752862376 +0000 UTC m=+0.103336001 container start abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:29:42 np0005605476 friendly_blackwell[144767]: 167 167
Feb  2 12:29:42 np0005605476 systemd[1]: libpod-abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f.scope: Deactivated successfully.
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.757572534 +0000 UTC m=+0.108046159 container attach abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:29:42 np0005605476 conmon[144767]: conmon abee6f0fdb789fdd9d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f.scope/container/memory.events
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.758394676 +0000 UTC m=+0.108868311 container died abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.668221974 +0000 UTC m=+0.018695619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:42 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3ec65a0481451363d304db3ba6058a387631715ad30923fce8f8605230d13f4c-merged.mount: Deactivated successfully.
Feb  2 12:29:42 np0005605476 podman[144775]: 2026-02-02 17:29:42.781062912 +0000 UTC m=+0.048546941 container create 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 12:29:42 np0005605476 podman[144775]: 2026-02-02 17:29:42.751519969 +0000 UTC m=+0.019004018 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 12:29:42 np0005605476 python3[144482]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 12:29:42 np0005605476 podman[144734]: 2026-02-02 17:29:42.794453996 +0000 UTC m=+0.144927621 container remove abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:29:42 np0005605476 systemd[1]: libpod-conmon-abee6f0fdb789fdd9d29fc3174bda62b6966d85a8022e191d62592e3ad6d6c4f.scope: Deactivated successfully.
Feb  2 12:29:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:42 np0005605476 podman[144833]: 2026-02-02 17:29:42.896386347 +0000 UTC m=+0.032259358 container create e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:29:42 np0005605476 systemd[1]: Started libpod-conmon-e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206.scope.
Feb  2 12:29:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:42 np0005605476 podman[144833]: 2026-02-02 17:29:42.968339834 +0000 UTC m=+0.104212875 container init e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:29:42 np0005605476 podman[144833]: 2026-02-02 17:29:42.97226481 +0000 UTC m=+0.108137821 container start e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:29:42 np0005605476 podman[144833]: 2026-02-02 17:29:42.974913192 +0000 UTC m=+0.110786203 container attach e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:29:42 np0005605476 podman[144833]: 2026-02-02 17:29:42.878980684 +0000 UTC m=+0.014853695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:43 np0005605476 python3.9[145012]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:29:43 np0005605476 thirsty_golick[144873]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:29:43 np0005605476 thirsty_golick[144873]: --> All data devices are unavailable
Feb  2 12:29:43 np0005605476 systemd[1]: libpod-e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206.scope: Deactivated successfully.
Feb  2 12:29:43 np0005605476 podman[144833]: 2026-02-02 17:29:43.479844619 +0000 UTC m=+0.615717630 container died e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:29:43 np0005605476 systemd[1]: var-lib-containers-storage-overlay-64cd666abd8c3ec17dd65ebfcb6af4587585db34c5e67ca18f9e6bee74666f7d-merged.mount: Deactivated successfully.
Feb  2 12:29:43 np0005605476 podman[144833]: 2026-02-02 17:29:43.517205495 +0000 UTC m=+0.653078506 container remove e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:29:43 np0005605476 systemd[1]: libpod-conmon-e43bc275168d47cc00442bb2cbfd3acd3cf4e471045e460fe0399e9f32091206.scope: Deactivated successfully.
Feb  2 12:29:43 np0005605476 podman[145219]: 2026-02-02 17:29:43.931062606 +0000 UTC m=+0.068927185 container create 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:29:43 np0005605476 systemd[1]: Started libpod-conmon-634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110.scope.
Feb  2 12:29:43 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:43 np0005605476 podman[145219]: 2026-02-02 17:29:43.897526215 +0000 UTC m=+0.035390814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:43 np0005605476 podman[145219]: 2026-02-02 17:29:43.997364519 +0000 UTC m=+0.135229138 container init 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:29:44 np0005605476 podman[145219]: 2026-02-02 17:29:44.002227781 +0000 UTC m=+0.140092370 container start 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:29:44 np0005605476 podman[145219]: 2026-02-02 17:29:44.005904411 +0000 UTC m=+0.143769000 container attach 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:29:44 np0005605476 systemd[1]: libpod-634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110.scope: Deactivated successfully.
Feb  2 12:29:44 np0005605476 heuristic_mahavira[145265]: 167 167
Feb  2 12:29:44 np0005605476 conmon[145265]: conmon 634b91aa86b9f98ed880 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110.scope/container/memory.events
Feb  2 12:29:44 np0005605476 podman[145219]: 2026-02-02 17:29:44.007634778 +0000 UTC m=+0.145499377 container died 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:29:44 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fda67827302323aa761de235dbe11ac955403a21d997ae17f7b9c225db32ef98-merged.mount: Deactivated successfully.
Feb  2 12:29:44 np0005605476 podman[145219]: 2026-02-02 17:29:44.04964649 +0000 UTC m=+0.187511099 container remove 634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:29:44 np0005605476 systemd[1]: libpod-conmon-634b91aa86b9f98ed880de3403686ad5c936c8934b7fa95057d408ec4e369110.scope: Deactivated successfully.
Feb  2 12:29:44 np0005605476 python3.9[145262]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:44 np0005605476 podman[145287]: 2026-02-02 17:29:44.183734215 +0000 UTC m=+0.047058370 container create 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:29:44 np0005605476 systemd[1]: Started libpod-conmon-9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb.scope.
Feb  2 12:29:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb6493523646afe42fa99a00881114fa5a718e162770207561e0679ac435b72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb6493523646afe42fa99a00881114fa5a718e162770207561e0679ac435b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb6493523646afe42fa99a00881114fa5a718e162770207561e0679ac435b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb6493523646afe42fa99a00881114fa5a718e162770207561e0679ac435b72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:44 np0005605476 podman[145287]: 2026-02-02 17:29:44.163238508 +0000 UTC m=+0.026562733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:44 np0005605476 podman[145287]: 2026-02-02 17:29:44.279199151 +0000 UTC m=+0.142523376 container init 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:29:44 np0005605476 podman[145287]: 2026-02-02 17:29:44.285582244 +0000 UTC m=+0.148906419 container start 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:29:44 np0005605476 podman[145287]: 2026-02-02 17:29:44.289232173 +0000 UTC m=+0.152556318 container attach 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:29:44 np0005605476 python3.9[145384]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]: {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    "0": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "devices": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "/dev/loop3"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            ],
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_name": "ceph_lv0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_size": "21470642176",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "name": "ceph_lv0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "tags": {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_name": "ceph",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.crush_device_class": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.encrypted": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.objectstore": "bluestore",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_id": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.vdo": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.with_tpm": "0"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            },
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "vg_name": "ceph_vg0"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        }
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    ],
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    "1": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "devices": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "/dev/loop4"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            ],
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_name": "ceph_lv1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_size": "21470642176",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "name": "ceph_lv1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "tags": {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_name": "ceph",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.crush_device_class": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.encrypted": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.objectstore": "bluestore",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_id": "1",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.vdo": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.with_tpm": "0"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            },
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "vg_name": "ceph_vg1"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        }
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    ],
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    "2": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "devices": [
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "/dev/loop5"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            ],
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_name": "ceph_lv2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_size": "21470642176",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "name": "ceph_lv2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "tags": {
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.cluster_name": "ceph",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.crush_device_class": "",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.encrypted": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.objectstore": "bluestore",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osd_id": "2",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.vdo": "0",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:                "ceph.with_tpm": "0"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            },
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "type": "block",
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:            "vg_name": "ceph_vg2"
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:        }
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]:    ]
Feb  2 12:29:44 np0005605476 dazzling_hypatia[145327]: }
Feb  2 12:29:44 np0005605476 systemd[1]: libpod-9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb.scope: Deactivated successfully.
Feb  2 12:29:44 np0005605476 podman[145412]: 2026-02-02 17:29:44.589696102 +0000 UTC m=+0.022269566 container died 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:29:44 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5fb6493523646afe42fa99a00881114fa5a718e162770207561e0679ac435b72-merged.mount: Deactivated successfully.
Feb  2 12:29:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:44 np0005605476 podman[145412]: 2026-02-02 17:29:44.628546669 +0000 UTC m=+0.061120133 container remove 9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:29:44 np0005605476 systemd[1]: libpod-conmon-9c7be14f5953cc564ffed69bad457c33a03bb1c86cb956fef722042466928fdb.scope: Deactivated successfully.
Feb  2 12:29:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.061706875 +0000 UTC m=+0.038796546 container create 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 12:29:45 np0005605476 systemd[1]: Started libpod-conmon-50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c.scope.
Feb  2 12:29:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.125213861 +0000 UTC m=+0.102303612 container init 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.129556349 +0000 UTC m=+0.106646050 container start 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:29:45 np0005605476 sleepy_wilson[145633]: 167 167
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.134032631 +0000 UTC m=+0.111122342 container attach 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:29:45 np0005605476 systemd[1]: libpod-50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c.scope: Deactivated successfully.
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.134760881 +0000 UTC m=+0.111850592 container died 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.044951239 +0000 UTC m=+0.022040950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:45 np0005605476 python3.9[145602]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770053384.5571673-574-2751228725592/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-eafa6b0262b7bff6ea19958dd5c1a9ddb290cf73c0f0209b192dff3b9f936637-merged.mount: Deactivated successfully.
Feb  2 12:29:45 np0005605476 podman[145616]: 2026-02-02 17:29:45.172034804 +0000 UTC m=+0.149124495 container remove 50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:29:45 np0005605476 systemd[1]: libpod-conmon-50ecd6b3813a51172c573e338a929b3a821625b3b55c53c23c3e3531bff0622c.scope: Deactivated successfully.
Feb  2 12:29:45 np0005605476 podman[145681]: 2026-02-02 17:29:45.287279037 +0000 UTC m=+0.037184412 container create c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:29:45 np0005605476 systemd[1]: Started libpod-conmon-c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be.scope.
Feb  2 12:29:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c30d45028fb5f39c21a954059a397e59b35c2102ca0ad8171bb7677455c5e8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c30d45028fb5f39c21a954059a397e59b35c2102ca0ad8171bb7677455c5e8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c30d45028fb5f39c21a954059a397e59b35c2102ca0ad8171bb7677455c5e8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c30d45028fb5f39c21a954059a397e59b35c2102ca0ad8171bb7677455c5e8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:45 np0005605476 podman[145681]: 2026-02-02 17:29:45.27230008 +0000 UTC m=+0.022205475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:29:45 np0005605476 podman[145681]: 2026-02-02 17:29:45.39517077 +0000 UTC m=+0.145076165 container init c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:29:45 np0005605476 podman[145681]: 2026-02-02 17:29:45.402035977 +0000 UTC m=+0.151941362 container start c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:29:45 np0005605476 podman[145681]: 2026-02-02 17:29:45.405609324 +0000 UTC m=+0.155514729 container attach c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:29:45 np0005605476 python3.9[145753]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:29:45 np0005605476 systemd[1]: Reloading.
Feb  2 12:29:45 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:29:45 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:29:45 np0005605476 lvm[145865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:29:45 np0005605476 lvm[145865]: VG ceph_vg0 finished
Feb  2 12:29:45 np0005605476 lvm[145866]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:29:45 np0005605476 lvm[145866]: VG ceph_vg1 finished
Feb  2 12:29:45 np0005605476 lvm[145868]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:29:45 np0005605476 lvm[145868]: VG ceph_vg2 finished
Feb  2 12:29:46 np0005605476 adoring_hermann[145722]: {}
Feb  2 12:29:46 np0005605476 systemd[1]: libpod-c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be.scope: Deactivated successfully.
Feb  2 12:29:46 np0005605476 podman[145681]: 2026-02-02 17:29:46.108249565 +0000 UTC m=+0.858154980 container died c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:29:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7c30d45028fb5f39c21a954059a397e59b35c2102ca0ad8171bb7677455c5e8f-merged.mount: Deactivated successfully.
Feb  2 12:29:46 np0005605476 podman[145681]: 2026-02-02 17:29:46.157246357 +0000 UTC m=+0.907151752 container remove c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:29:46 np0005605476 systemd[1]: libpod-conmon-c2a23aed592a52d568ba5854554270a33f8bb0dc3101d0e7336fccbe075905be.scope: Deactivated successfully.
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:46 np0005605476 python3.9[145960]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:29:46 np0005605476 systemd[1]: Reloading.
Feb  2 12:29:46 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:29:46 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:46 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:29:46 np0005605476 systemd[1]: Starting ovn_controller container...
Feb  2 12:29:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:29:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4de35ad4e032854b890938b202cbce85adbe4d8ddcb341dbb469a9938ddbc5/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb  2 12:29:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:46 np0005605476 systemd[1]: Started /usr/bin/podman healthcheck run 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783.
Feb  2 12:29:46 np0005605476 podman[146026]: 2026-02-02 17:29:46.875525905 +0000 UTC m=+0.100172275 container init 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:29:46 np0005605476 ovn_controller[146041]: + sudo -E kolla_set_configs
Feb  2 12:29:46 np0005605476 podman[146026]: 2026-02-02 17:29:46.897724498 +0000 UTC m=+0.122370828 container start 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Feb  2 12:29:46 np0005605476 edpm-start-podman-container[146026]: ovn_controller
Feb  2 12:29:46 np0005605476 systemd[1]: Created slice User Slice of UID 0.
Feb  2 12:29:46 np0005605476 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb  2 12:29:46 np0005605476 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb  2 12:29:46 np0005605476 edpm-start-podman-container[146025]: Creating additional drop-in dependency for "ovn_controller" (70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783)
Feb  2 12:29:46 np0005605476 systemd[1]: Starting User Manager for UID 0...
Feb  2 12:29:46 np0005605476 systemd[1]: Reloading.
Feb  2 12:29:46 np0005605476 podman[146048]: 2026-02-02 17:29:46.982831632 +0000 UTC m=+0.075899195 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:29:47 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:29:47 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:29:47 np0005605476 systemd[146076]: Queued start job for default target Main User Target.
Feb  2 12:29:47 np0005605476 systemd[146076]: Created slice User Application Slice.
Feb  2 12:29:47 np0005605476 systemd[146076]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb  2 12:29:47 np0005605476 systemd[146076]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 12:29:47 np0005605476 systemd[146076]: Reached target Paths.
Feb  2 12:29:47 np0005605476 systemd[146076]: Reached target Timers.
Feb  2 12:29:47 np0005605476 systemd[146076]: Starting D-Bus User Message Bus Socket...
Feb  2 12:29:47 np0005605476 systemd[146076]: Starting Create User's Volatile Files and Directories...
Feb  2 12:29:47 np0005605476 systemd[146076]: Finished Create User's Volatile Files and Directories.
Feb  2 12:29:47 np0005605476 systemd[146076]: Listening on D-Bus User Message Bus Socket.
Feb  2 12:29:47 np0005605476 systemd[146076]: Reached target Sockets.
Feb  2 12:29:47 np0005605476 systemd[146076]: Reached target Basic System.
Feb  2 12:29:47 np0005605476 systemd[146076]: Reached target Main User Target.
Feb  2 12:29:47 np0005605476 systemd[146076]: Startup finished in 136ms.
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:29:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:29:47 np0005605476 systemd[1]: Started User Manager for UID 0.
Feb  2 12:29:47 np0005605476 systemd[1]: Started ovn_controller container.
Feb  2 12:29:47 np0005605476 systemd[1]: 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783-2c712ae2bc69d601.service: Main process exited, code=exited, status=1/FAILURE
Feb  2 12:29:47 np0005605476 systemd[1]: 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783-2c712ae2bc69d601.service: Failed with result 'exit-code'.
Feb  2 12:29:47 np0005605476 systemd[1]: Started Session c1 of User root.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: INFO:__main__:Validating config file
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: INFO:__main__:Writing out command to execute
Feb  2 12:29:47 np0005605476 systemd[1]: session-c1.scope: Deactivated successfully.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: ++ cat /run_command
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + ARGS=
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + sudo kolla_copy_cacerts
Feb  2 12:29:47 np0005605476 systemd[1]: Started Session c2 of User root.
Feb  2 12:29:47 np0005605476 systemd[1]: session-c2.scope: Deactivated successfully.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + [[ ! -n '' ]]
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + . kolla_extend_start
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + umask 0022
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3463] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3473] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <warn>  [1770053387.3475] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3486] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3495] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3501] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 12:29:47 np0005605476 kernel: br-int: entered promiscuous mode
Feb  2 12:29:47 np0005605476 systemd-udevd[145864]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 12:29:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:29:47Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3716] manager: (ovn-9381f8-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb  2 12:29:47 np0005605476 kernel: genev_sys_6081: entered promiscuous mode
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3871] device (genev_sys_6081): carrier: link connected
Feb  2 12:29:47 np0005605476 NetworkManager[49022]: <info>  [1770053387.3875] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb  2 12:29:48 np0005605476 python3.9[146307]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 12:29:48 np0005605476 python3.9[146459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:29:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:49 np0005605476 python3.9[146582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053388.4449053-619-53972922868637/.source.yaml _original_basename=.45vv9rav follow=False checksum=43c869df0f980a472f943422008c6c6b187a1ff1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:29:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:50 np0005605476 python3.9[146734]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:50 np0005605476 ovs-vsctl[146735]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb  2 12:29:50 np0005605476 python3.9[146887]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:50 np0005605476 ovs-vsctl[146889]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb  2 12:29:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:51 np0005605476 python3.9[147042]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:29:51 np0005605476 ovs-vsctl[147043]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb  2 12:29:51 np0005605476 systemd[1]: session-45.scope: Deactivated successfully.
Feb  2 12:29:51 np0005605476 systemd[1]: session-45.scope: Consumed 48.177s CPU time.
Feb  2 12:29:51 np0005605476 systemd-logind[799]: Session 45 logged out. Waiting for processes to exit.
Feb  2 12:29:51 np0005605476 systemd-logind[799]: Removed session 45.
Feb  2 12:29:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:57 np0005605476 systemd-logind[799]: New session 47 of user zuul.
Feb  2 12:29:57 np0005605476 systemd[1]: Started Session 47 of User zuul.
Feb  2 12:29:57 np0005605476 systemd[1]: Stopping User Manager for UID 0...
Feb  2 12:29:57 np0005605476 systemd[146076]: Activating special unit Exit the Session...
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped target Main User Target.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped target Basic System.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped target Paths.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped target Sockets.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped target Timers.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 12:29:57 np0005605476 systemd[146076]: Closed D-Bus User Message Bus Socket.
Feb  2 12:29:57 np0005605476 systemd[146076]: Stopped Create User's Volatile Files and Directories.
Feb  2 12:29:57 np0005605476 systemd[146076]: Removed slice User Application Slice.
Feb  2 12:29:57 np0005605476 systemd[146076]: Reached target Shutdown.
Feb  2 12:29:57 np0005605476 systemd[146076]: Finished Exit the Session.
Feb  2 12:29:57 np0005605476 systemd[146076]: Reached target Exit the Session.
Feb  2 12:29:57 np0005605476 systemd[1]: user@0.service: Deactivated successfully.
Feb  2 12:29:57 np0005605476 systemd[1]: Stopped User Manager for UID 0.
Feb  2 12:29:57 np0005605476 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb  2 12:29:57 np0005605476 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb  2 12:29:57 np0005605476 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb  2 12:29:57 np0005605476 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb  2 12:29:57 np0005605476 systemd[1]: Removed slice User Slice of UID 0.
Feb  2 12:29:58 np0005605476 python3.9[147223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:29:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:29:59 np0005605476 python3.9[147379]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:29:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:29:59 np0005605476 python3.9[147531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:00 np0005605476 python3.9[147683]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:00 np0005605476 python3.9[147835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:01 np0005605476 python3.9[147987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:02 np0005605476 python3.9[148137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:30:02 np0005605476 python3.9[148289]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 12:30:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:04 np0005605476 python3.9[148439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:04 np0005605476 python3.9[148560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053403.540325-81-141573074536465/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:05 np0005605476 python3.9[148710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:05 np0005605476 python3.9[148832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053404.7838461-96-79413972148249/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:06 np0005605476 python3.9[148984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:30:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:07 np0005605476 python3.9[149068]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:09 np0005605476 python3.9[149221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:30:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:10 np0005605476 python3.9[149374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:10 np0005605476 python3.9[149495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053409.8330102-133-27462143384970/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:11 np0005605476 python3.9[149645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:11 np0005605476 python3.9[149766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053410.7810538-133-115447850471367/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:12 np0005605476 python3.9[149916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:13 np0005605476 python3.9[150037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053412.3335207-177-279908583268278/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:13 np0005605476 python3.9[150187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:14 np0005605476 python3.9[150308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053413.2707753-177-154931236556759/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:14 np0005605476 python3.9[150458]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:30:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:15 np0005605476 python3.9[150612]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:15 np0005605476 python3.9[150764]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:16 np0005605476 python3.9[150842]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:16 np0005605476 python3.9[150994]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:16 np0005605476 python3.9[151072]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:30:17Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Feb  2 12:30:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:30:17Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb  2 12:30:17 np0005605476 podman[151196]: 2026-02-02 17:30:17.432183905 +0000 UTC m=+0.091626642 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:30:17 np0005605476 python3.9[151243]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:18 np0005605476 python3.9[151402]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:18 np0005605476 python3.9[151480]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:30:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2065 writes, 9154 keys, 2065 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2065 writes, 2065 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2065 writes, 9154 keys, 2065 commit groups, 1.0 writes per commit group, ingest: 12.32 MB, 0.02 MB/s#012Interval WAL: 2065 writes, 2065 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    186.7      0.05              0.02         3    0.016       0      0       0.0       0.0#012  L6      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    193.2    169.0      0.08              0.03         2    0.042    7210    732       0.0       0.0#012 Sum      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    123.8    175.4      0.13              0.04         5    0.026    7210    732       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    126.6    178.9      0.13              0.04         4    0.032    7210    732       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    193.2    169.0      0.08              0.03         2    0.042    7210    732       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    197.5      0.04              0.02         2    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 308.00 MB usage: 706.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,619.19 KB,0.196323%) FilterBlock(6,28.36 KB,0.00899179%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 12:30:19 np0005605476 python3.9[151632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:19 np0005605476 python3.9[151710]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:20 np0005605476 python3.9[151862]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:30:20 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:20 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:20 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:21 np0005605476 python3.9[152051]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:21 np0005605476 python3.9[152129]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:22 np0005605476 python3.9[152281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:22 np0005605476 python3.9[152359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:23 np0005605476 python3.9[152511]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:30:23 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:23 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:23 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:23 np0005605476 systemd[1]: Starting Create netns directory...
Feb  2 12:30:23 np0005605476 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 12:30:23 np0005605476 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 12:30:23 np0005605476 systemd[1]: Finished Create netns directory.
Feb  2 12:30:24 np0005605476 python3.9[152704]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:24 np0005605476 python3.9[152856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:25 np0005605476 python3.9[152979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053424.3352315-328-111647750830296/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:26 np0005605476 python3.9[153131]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:26 np0005605476 python3.9[153283]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:30:27 np0005605476 python3.9[153435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:28 np0005605476 python3.9[153558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053427.149728-361-150215999820130/.source.json _original_basename=.lnvhi02m follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:28 np0005605476 python3.9[153708]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:30 np0005605476 python3.9[154131]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb  2 12:30:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:31 np0005605476 python3.9[154283]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 12:30:32 np0005605476 python3[154435]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 12:30:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:30:36
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:30:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:30:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:30:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.315940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440316024, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 784, "num_deletes": 251, "total_data_size": 1051067, "memory_usage": 1067944, "flush_reason": "Manual Compaction"}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440383788, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1041820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8926, "largest_seqno": 9709, "table_properties": {"data_size": 1037826, "index_size": 1774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8502, "raw_average_key_size": 18, "raw_value_size": 1029804, "raw_average_value_size": 2253, "num_data_blocks": 82, "num_entries": 457, "num_filter_entries": 457, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053370, "oldest_key_time": 1770053370, "file_creation_time": 1770053440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 67893 microseconds, and 3785 cpu microseconds.
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.383850) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1041820 bytes OK
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.383876) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.399121) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.399154) EVENT_LOG_v1 {"time_micros": 1770053440399145, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.399184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1047128, prev total WAL file size 1047128, number of live WAL files 2.
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.399737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1017KB)], [23(6918KB)]
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440400026, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8126746, "oldest_snapshot_seqno": -1}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3306 keys, 6315274 bytes, temperature: kUnknown
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440447864, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6315274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6291103, "index_size": 14782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80108, "raw_average_key_size": 24, "raw_value_size": 6229225, "raw_average_value_size": 1884, "num_data_blocks": 646, "num_entries": 3306, "num_filter_entries": 3306, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770053440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.448097) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6315274 bytes
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.454324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.1 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.8 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.1) OK, records in: 3820, records dropped: 514 output_compression: NoCompression
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.454342) EVENT_LOG_v1 {"time_micros": 1770053440454333, "job": 8, "event": "compaction_finished", "compaction_time_micros": 47231, "compaction_time_cpu_micros": 14051, "output_level": 6, "num_output_files": 1, "total_output_size": 6315274, "num_input_records": 3820, "num_output_records": 3306, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440454502, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053440454946, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.399609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.455017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.455025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.455029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.455033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:30:40.455036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:30:40 np0005605476 podman[154448]: 2026-02-02 17:30:40.526904774 +0000 UTC m=+8.014291686 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:30:40 np0005605476 podman[154571]: 2026-02-02 17:30:40.692179567 +0000 UTC m=+0.084345154 container create 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 12:30:40 np0005605476 podman[154571]: 2026-02-02 17:30:40.626959024 +0000 UTC m=+0.019124661 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:30:40 np0005605476 python3[154435]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:30:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:41 np0005605476 python3.9[154761]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:30:42 np0005605476 python3.9[154915]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:42 np0005605476 python3.9[154991]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:30:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:43 np0005605476 python3.9[155142]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770053442.547595-439-88203685391038/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:43 np0005605476 python3.9[155218]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:30:43 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:43 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:43 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:44 np0005605476 python3.9[155329]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:30:44 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:44 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:44 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:44 np0005605476 systemd[1]: Starting ovn_metadata_agent container...
Feb  2 12:30:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342c35df542ed96dbd206211c1cb8b1fbfcf5f53e46a9d2541015e043cad3895/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342c35df542ed96dbd206211c1cb8b1fbfcf5f53e46a9d2541015e043cad3895/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:44 np0005605476 systemd[1]: Started /usr/bin/podman healthcheck run 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b.
Feb  2 12:30:44 np0005605476 podman[155370]: 2026-02-02 17:30:44.833167644 +0000 UTC m=+0.125903374 container init 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + sudo -E kolla_set_configs
Feb  2 12:30:44 np0005605476 podman[155370]: 2026-02-02 17:30:44.86171804 +0000 UTC m=+0.154453730 container start 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  2 12:30:44 np0005605476 edpm-start-podman-container[155370]: ovn_metadata_agent
Feb  2 12:30:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Validating config file
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Copying service configuration files
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Writing out command to execute
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb  2 12:30:44 np0005605476 edpm-start-podman-container[155369]: Creating additional drop-in dependency for "ovn_metadata_agent" (983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b)
Feb  2 12:30:44 np0005605476 podman[155393]: 2026-02-02 17:30:44.9257153 +0000 UTC m=+0.055867740 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: ++ cat /run_command
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + CMD=neutron-ovn-metadata-agent
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + ARGS=
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + sudo kolla_copy_cacerts
Feb  2 12:30:44 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: Running command: 'neutron-ovn-metadata-agent'
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + [[ ! -n '' ]]
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + . kolla_extend_start
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + umask 0022
Feb  2 12:30:44 np0005605476 ovn_metadata_agent[155386]: + exec neutron-ovn-metadata-agent
Feb  2 12:30:45 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:45 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:45 np0005605476 systemd[1]: Started ovn_metadata_agent container.
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.570 155391 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.571 155391 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.572 155391 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.572 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.572 155391 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.572 155391 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.573 155391 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.574 155391 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.575 155391 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.576 155391 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.577 155391 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.578 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.579 155391 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.580 155391 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.581 155391 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.582 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.583 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.584 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.585 155391 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.586 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.587 155391 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.588 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.589 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.590 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.591 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.592 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.593 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.593 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.593 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.593 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.593 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.594 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.595 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.596 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.596 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.596 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.596 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.596 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.597 155391 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.598 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.599 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.600 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.601 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.602 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.603 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.604 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.605 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.606 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.606 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.606 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.606 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.606 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.607 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.608 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.609 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.610 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.611 155391 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.612 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.613 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.614 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.615 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.616 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.616 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.616 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.616 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.616 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.617 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.618 155391 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.618 155391 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.626 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.627 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.627 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.627 155391 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.627 155391 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.641 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 13051b64-c07e-4136-ad5c-993d3a84d93c (UUID: 13051b64-c07e-4136-ad5c-993d3a84d93c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.666 155391 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.666 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.666 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.667 155391 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.669 155391 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.674 155391 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.679 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '13051b64-c07e-4136-ad5c-993d3a84d93c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], external_ids={}, name=13051b64-c07e-4136-ad5c-993d3a84d93c, nb_cfg_timestamp=1770053395372, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.680 155391 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fc771db9fd0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.681 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.681 155391 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.681 155391 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.682 155391 INFO oslo_service.service [-] Starting 1 workers#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.685 155391 DEBUG oslo_service.service [-] Started child 155696 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.687 155391 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpty0fqf9d/privsep.sock']#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.688 155696 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4032981'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.709 155696 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.709 155696 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.710 155696 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.712 155696 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.720 155696 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 12:30:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:46.727 155696 INFO eventlet.wsgi.server [-] (155696) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Feb  2 12:30:46 np0005605476 python3.9[155622]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:30:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:30:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.169437248 +0000 UTC m=+0.047571058 container create 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:30:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:30:47 np0005605476 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb  2 12:30:47 np0005605476 systemd[1]: Started libpod-conmon-0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc.scope.
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.144176858 +0000 UTC m=+0.022310698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.258002917 +0000 UTC m=+0.136136747 container init 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.263869391 +0000 UTC m=+0.142003201 container start 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.272915856 +0000 UTC m=+0.151049696 container attach 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:30:47 np0005605476 competent_banach[155892]: 167 167
Feb  2 12:30:47 np0005605476 systemd[1]: libpod-0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc.scope: Deactivated successfully.
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.277758412 +0000 UTC m=+0.155892222 container died 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:30:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5c999c756731d8e6fd0dff677d461abb569c8cd1b627f09d830c857b67dce95c-merged.mount: Deactivated successfully.
Feb  2 12:30:47 np0005605476 podman[155826]: 2026-02-02 17:30:47.311046447 +0000 UTC m=+0.189180257 container remove 0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:30:47 np0005605476 systemd[1]: libpod-conmon-0889f2258a0cf4b2c8277d5ab1a9121d5220cccc690beca614f17629cd54f9bc.scope: Deactivated successfully.
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.331 155391 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.332 155391 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpty0fqf9d/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.210 155891 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.213 155891 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.215 155891 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.215 155891 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155891#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.335 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa0253c-4734-418c-b8ac-901185f10639]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.432447049 +0000 UTC m=+0.043516764 container create 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:30:47 np0005605476 python3.9[155962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:30:47 np0005605476 systemd[1]: Started libpod-conmon-8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f.scope.
Feb  2 12:30:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.413005533 +0000 UTC m=+0.024075278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.526743079 +0000 UTC m=+0.137812804 container init 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.531814272 +0000 UTC m=+0.142883987 container start 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:30:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:30:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:47 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.537530792 +0000 UTC m=+0.148600507 container attach 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:30:47 np0005605476 podman[155985]: 2026-02-02 17:30:47.563822181 +0000 UTC m=+0.098335724 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.787 155891 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.787 155891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:30:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:47.787 155891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:30:47 np0005605476 python3.9[156144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053447.0988772-484-68449553368957/.source.yaml _original_basename=.oaz6jj0n follow=False checksum=1a3f6194ad4cafbd3bda0dd8d9e24cd78970ee5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:30:47 np0005605476 exciting_ardinghelli[155997]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:30:47 np0005605476 exciting_ardinghelli[155997]: --> All data devices are unavailable
Feb  2 12:30:47 np0005605476 podman[155971]: 2026-02-02 17:30:47.986471019 +0000 UTC m=+0.597540754 container died 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:30:47 np0005605476 systemd[1]: libpod-8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f.scope: Deactivated successfully.
Feb  2 12:30:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4d2743cc1591a154dc55a74c30e0ca03f656dfd69dc90cd0831627f01b6ceb49-merged.mount: Deactivated successfully.
Feb  2 12:30:48 np0005605476 podman[155971]: 2026-02-02 17:30:48.034129119 +0000 UTC m=+0.645198834 container remove 8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:30:48 np0005605476 systemd[1]: libpod-conmon-8512800c276c4db399811014680d478ccd6bcb456c1e1c25f9db4b7f5433e52f.scope: Deactivated successfully.
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.259 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[003c9226-3751-4c90-b18d-d143b4056d5a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.263 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, column=external_ids, values=({'neutron:ovn-metadata-id': '31e13ddf-1bfb-5b00-8e7d-4019580a7bd7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.277 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.283 155391 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.284 155391 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.285 155391 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.286 155391 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.287 155391 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.288 155391 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.289 155391 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.290 155391 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.291 155391 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.292 155391 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.293 155391 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.294 155391 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.295 155391 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.296 155391 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.297 155391 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.298 155391 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.299 155391 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.300 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.301 155391 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.302 155391 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.303 155391 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.304 155391 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.305 155391 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.306 155391 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.307 155391 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.308 155391 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.309 155391 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.310 155391 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.311 155391 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.312 155391 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.313 155391 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.314 155391 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.315 155391 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.316 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.317 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.318 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.319 155391 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.320 155391 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.320 155391 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:30:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:30:48.320 155391 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 12:30:48 np0005605476 systemd-logind[799]: Session 47 logged out. Waiting for processes to exit.
Feb  2 12:30:48 np0005605476 systemd[1]: session-47.scope: Deactivated successfully.
Feb  2 12:30:48 np0005605476 systemd[1]: session-47.scope: Consumed 46.332s CPU time.
Feb  2 12:30:48 np0005605476 systemd-logind[799]: Removed session 47.
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.406116223 +0000 UTC m=+0.043607467 container create f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:30:48 np0005605476 systemd[1]: Started libpod-conmon-f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a.scope.
Feb  2 12:30:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.383691523 +0000 UTC m=+0.021182807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.487189181 +0000 UTC m=+0.124680535 container init f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.496430801 +0000 UTC m=+0.133922075 container start f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:30:48 np0005605476 busy_yalow[156272]: 167 167
Feb  2 12:30:48 np0005605476 systemd[1]: libpod-f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a.scope: Deactivated successfully.
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.49996625 +0000 UTC m=+0.137457604 container attach f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.501132313 +0000 UTC m=+0.138623597 container died f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:30:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b49800278f3ea21e8e3dcf66f854facb2784583a9f093b482cc9b0e9f5c763e9-merged.mount: Deactivated successfully.
Feb  2 12:30:48 np0005605476 podman[156256]: 2026-02-02 17:30:48.536628991 +0000 UTC m=+0.174120245 container remove f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:30:48 np0005605476 systemd[1]: libpod-conmon-f0c11ba0e32e4d1b3623d49d5bf74676662b64bc3963aa865170fbe360bdac4a.scope: Deactivated successfully.
Feb  2 12:30:48 np0005605476 podman[156295]: 2026-02-02 17:30:48.672629533 +0000 UTC m=+0.047876807 container create e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:30:48 np0005605476 systemd[1]: Started libpod-conmon-e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a.scope.
Feb  2 12:30:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:48 np0005605476 podman[156295]: 2026-02-02 17:30:48.648141805 +0000 UTC m=+0.023389129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e459e89a4fb338ca1c7eb0292737cbfde62b917ac3fe078e805aee7629d580b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e459e89a4fb338ca1c7eb0292737cbfde62b917ac3fe078e805aee7629d580b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e459e89a4fb338ca1c7eb0292737cbfde62b917ac3fe078e805aee7629d580b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e459e89a4fb338ca1c7eb0292737cbfde62b917ac3fe078e805aee7629d580b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:48 np0005605476 podman[156295]: 2026-02-02 17:30:48.763525157 +0000 UTC m=+0.138772491 container init e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:30:48 np0005605476 podman[156295]: 2026-02-02 17:30:48.769174016 +0000 UTC m=+0.144421280 container start e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:30:48 np0005605476 podman[156295]: 2026-02-02 17:30:48.773189839 +0000 UTC m=+0.148437103 container attach e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:30:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:49 np0005605476 recursing_napier[156312]: {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    "0": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "devices": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "/dev/loop3"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            ],
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_name": "ceph_lv0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_size": "21470642176",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "name": "ceph_lv0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "tags": {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_name": "ceph",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.crush_device_class": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.encrypted": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.objectstore": "bluestore",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_id": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.vdo": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.with_tpm": "0"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            },
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "vg_name": "ceph_vg0"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        }
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    ],
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    "1": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "devices": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "/dev/loop4"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            ],
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_name": "ceph_lv1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_size": "21470642176",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "name": "ceph_lv1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "tags": {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_name": "ceph",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.crush_device_class": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.encrypted": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.objectstore": "bluestore",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_id": "1",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.vdo": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.with_tpm": "0"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            },
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "vg_name": "ceph_vg1"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        }
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    ],
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    "2": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "devices": [
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "/dev/loop5"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            ],
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_name": "ceph_lv2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_size": "21470642176",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "name": "ceph_lv2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "tags": {
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.cluster_name": "ceph",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.crush_device_class": "",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.encrypted": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.objectstore": "bluestore",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osd_id": "2",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.vdo": "0",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:                "ceph.with_tpm": "0"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            },
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "type": "block",
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:            "vg_name": "ceph_vg2"
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:        }
Feb  2 12:30:49 np0005605476 recursing_napier[156312]:    ]
Feb  2 12:30:49 np0005605476 recursing_napier[156312]: }
Feb  2 12:30:49 np0005605476 systemd[1]: libpod-e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a.scope: Deactivated successfully.
Feb  2 12:30:49 np0005605476 podman[156295]: 2026-02-02 17:30:49.069411224 +0000 UTC m=+0.444658468 container died e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:30:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3e459e89a4fb338ca1c7eb0292737cbfde62b917ac3fe078e805aee7629d580b-merged.mount: Deactivated successfully.
Feb  2 12:30:49 np0005605476 podman[156295]: 2026-02-02 17:30:49.104531941 +0000 UTC m=+0.479779175 container remove e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:30:49 np0005605476 systemd[1]: libpod-conmon-e87e057f5d891c0290c45bdeb95f1ac6ce75be2e2289f2c1b60306980dff671a.scope: Deactivated successfully.
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.522819526 +0000 UTC m=+0.041511437 container create 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:30:49 np0005605476 systemd[1]: Started libpod-conmon-2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46.scope.
Feb  2 12:30:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.592718681 +0000 UTC m=+0.111410632 container init 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.599961634 +0000 UTC m=+0.118653545 container start 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.504757709 +0000 UTC m=+0.023449670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.603486113 +0000 UTC m=+0.122178084 container attach 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:30:49 np0005605476 stoic_golick[156413]: 167 167
Feb  2 12:30:49 np0005605476 systemd[1]: libpod-2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46.scope: Deactivated successfully.
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.604885632 +0000 UTC m=+0.123577543 container died 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:30:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b59eb4c9441290be07bda9b5b5f68ac927f7a50954dfbd8521b12ebb2be680e4-merged.mount: Deactivated successfully.
Feb  2 12:30:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:49 np0005605476 podman[156396]: 2026-02-02 17:30:49.642381516 +0000 UTC m=+0.161073457 container remove 2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:30:49 np0005605476 systemd[1]: libpod-conmon-2e066c4ab4a3c76ba8ef18e0b0074c40a6fbfe91305c6300e708c5c85fcaed46.scope: Deactivated successfully.
Feb  2 12:30:49 np0005605476 podman[156436]: 2026-02-02 17:30:49.781671311 +0000 UTC m=+0.042904267 container create 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb  2 12:30:49 np0005605476 systemd[1]: Started libpod-conmon-9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658.scope.
Feb  2 12:30:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:30:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd2a649602e0510c46275d61e8f7e3b73cb54d79903a55e57909d975473f1e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd2a649602e0510c46275d61e8f7e3b73cb54d79903a55e57909d975473f1e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd2a649602e0510c46275d61e8f7e3b73cb54d79903a55e57909d975473f1e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd2a649602e0510c46275d61e8f7e3b73cb54d79903a55e57909d975473f1e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:30:49 np0005605476 podman[156436]: 2026-02-02 17:30:49.853948452 +0000 UTC m=+0.115181388 container init 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:30:49 np0005605476 podman[156436]: 2026-02-02 17:30:49.761291178 +0000 UTC m=+0.022524154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:30:49 np0005605476 podman[156436]: 2026-02-02 17:30:49.860728943 +0000 UTC m=+0.121961879 container start 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:30:49 np0005605476 podman[156436]: 2026-02-02 17:30:49.863543762 +0000 UTC m=+0.124776718 container attach 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:30:50 np0005605476 lvm[156528]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:30:50 np0005605476 lvm[156528]: VG ceph_vg0 finished
Feb  2 12:30:50 np0005605476 lvm[156531]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:30:50 np0005605476 lvm[156531]: VG ceph_vg1 finished
Feb  2 12:30:50 np0005605476 lvm[156533]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:30:50 np0005605476 lvm[156533]: VG ceph_vg2 finished
Feb  2 12:30:50 np0005605476 youthful_hugle[156452]: {}
Feb  2 12:30:50 np0005605476 systemd[1]: libpod-9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658.scope: Deactivated successfully.
Feb  2 12:30:50 np0005605476 podman[156436]: 2026-02-02 17:30:50.588770032 +0000 UTC m=+0.850002988 container died 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:30:50 np0005605476 systemd[1]: libpod-9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658.scope: Consumed 1.028s CPU time.
Feb  2 12:30:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bcd2a649602e0510c46275d61e8f7e3b73cb54d79903a55e57909d975473f1e7-merged.mount: Deactivated successfully.
Feb  2 12:30:50 np0005605476 podman[156436]: 2026-02-02 17:30:50.628034276 +0000 UTC m=+0.889267212 container remove 9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:30:50 np0005605476 systemd[1]: libpod-conmon-9930a82826aefe99e9ce732f45d95a895303b48ad26ac0f02e21236981ac4658.scope: Deactivated successfully.
Feb  2 12:30:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:30:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:30:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:51 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:30:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:53 np0005605476 systemd-logind[799]: New session 48 of user zuul.
Feb  2 12:30:53 np0005605476 systemd[1]: Started Session 48 of User zuul.
Feb  2 12:30:54 np0005605476 python3.9[156725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:30:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:30:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:55 np0005605476 python3.9[156881]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:30:56 np0005605476 python3.9[157046]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:30:56 np0005605476 systemd[1]: Reloading.
Feb  2 12:30:56 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:30:56 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:30:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:57 np0005605476 python3.9[157231]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:30:57 np0005605476 network[157248]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:30:57 np0005605476 network[157249]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:30:57 np0005605476 network[157250]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:30:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:30:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:01 np0005605476 python3.9[157512]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:02 np0005605476 python3.9[157665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:03 np0005605476 python3.9[157818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:03 np0005605476 python3.9[157971]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:04 np0005605476 python3.9[158124]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:05 np0005605476 python3.9[158277]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:06 np0005605476 python3.9[158430]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:31:06 np0005605476 python3.9[158583]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:07 np0005605476 python3.9[158735]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:08 np0005605476 python3.9[158887]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:08 np0005605476 python3.9[159039]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:09 np0005605476 python3.9[159191]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:09 np0005605476 python3.9[159343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:10 np0005605476 python3.9[159495]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:10 np0005605476 python3.9[159647]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:11 np0005605476 python3.9[159799]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:11 np0005605476 python3.9[159951]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:12 np0005605476 python3.9[160104]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:12 np0005605476 python3.9[160256]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:13 np0005605476 python3.9[160408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:14 np0005605476 python3.9[160560]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:31:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:14 np0005605476 python3.9[160712]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:15 np0005605476 podman[160838]: 2026-02-02 17:31:15.453045553 +0000 UTC m=+0.069683050 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  2 12:31:15 np0005605476 python3.9[160874]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:31:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:31:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5633 writes, 24K keys, 5633 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5633 writes, 904 syncs, 6.23 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5633 writes, 24K keys, 5633 commit groups, 1.0 writes per commit group, ingest: 18.64 MB, 0.03 MB/s#012Interval WAL: 5633 writes, 904 syncs, 6.23 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 12:31:16 np0005605476 python3.9[161035]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:31:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:31:16 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:31:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:31:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:17 np0005605476 python3.9[161221]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:17 np0005605476 podman[161346]: 2026-02-02 17:31:17.736318811 +0000 UTC m=+0.079666780 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 12:31:17 np0005605476 python3.9[161391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:18 np0005605476 python3.9[161554]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:18 np0005605476 python3.9[161707]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:19 np0005605476 python3.9[161860]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:31:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6971 writes, 29K keys, 6971 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6971 writes, 1356 syncs, 5.14 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6971 writes, 29K keys, 6971 commit groups, 1.0 writes per commit group, ingest: 19.76 MB, 0.03 MB/s#012Interval WAL: 6971 writes, 1356 syncs, 5.14 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb  2 12:31:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:19 np0005605476 python3.9[162013]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:20 np0005605476 python3.9[162166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:31:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:21 np0005605476 python3.9[162319]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb  2 12:31:21 np0005605476 python3.9[162472]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:31:22 np0005605476 python3.9[162630]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 12:31:22 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:31:22 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:31:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:31:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5432 writes, 23K keys, 5432 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5432 writes, 803 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5432 writes, 23K keys, 5432 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s#012Interval WAL: 5432 writes, 803 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 12:31:23 np0005605476 python3.9[162791]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:31:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:24 np0005605476 python3.9[162875]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:31:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:25 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 12:31:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:31:36
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:31:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:31:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:31:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:45 np0005605476 podman[162938]: 2026-02-02 17:31:45.64748302 +0000 UTC m=+0.083970781 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb  2 12:31:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:31:46.620 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:31:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:31:46.620 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:31:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:31:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:31:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:31:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:31:48 np0005605476 podman[163029]: 2026-02-02 17:31:48.648633452 +0000 UTC m=+0.091701428 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:31:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:31:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.178672549 +0000 UTC m=+0.044307826 container create ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:31:52 np0005605476 systemd[1]: Started libpod-conmon-ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba.scope.
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:52 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:31:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.158169147 +0000 UTC m=+0.023804494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.254543031 +0000 UTC m=+0.120178308 container init ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.263177488 +0000 UTC m=+0.128812765 container start ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.26690996 +0000 UTC m=+0.132545277 container attach ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:31:52 np0005605476 heuristic_cohen[163387]: 167 167
Feb  2 12:31:52 np0005605476 systemd[1]: libpod-ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba.scope: Deactivated successfully.
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.27017886 +0000 UTC m=+0.135814127 container died ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:31:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-53a3a2233d55da10af5996058b5916c2ff73dacd87b841e941039feaece76122-merged.mount: Deactivated successfully.
Feb  2 12:31:52 np0005605476 podman[163370]: 2026-02-02 17:31:52.301918111 +0000 UTC m=+0.167553378 container remove ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_cohen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:31:52 np0005605476 systemd[1]: libpod-conmon-ee16e3af591aaeadd02c2931af4fb3deaddb2cee74d618cd3d2a81786df31aba.scope: Deactivated successfully.
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.424277728 +0000 UTC m=+0.048436870 container create 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:31:52 np0005605476 systemd[1]: Started libpod-conmon-8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5.scope.
Feb  2 12:31:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.499038359 +0000 UTC m=+0.123197491 container init 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.405952245 +0000 UTC m=+0.030111387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.509196818 +0000 UTC m=+0.133355930 container start 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.512130579 +0000 UTC m=+0.136289681 container attach 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:31:52 np0005605476 compassionate_villani[163429]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:31:52 np0005605476 compassionate_villani[163429]: --> All data devices are unavailable
Feb  2 12:31:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:52 np0005605476 systemd[1]: libpod-8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5.scope: Deactivated successfully.
Feb  2 12:31:52 np0005605476 podman[163412]: 2026-02-02 17:31:52.9772292 +0000 UTC m=+0.601388332 container died 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:31:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ce5cd5b1cf70072083950e0f69e30f9da3e8b2fb46154724cd59f3ebae7fa6ea-merged.mount: Deactivated successfully.
Feb  2 12:31:53 np0005605476 podman[163412]: 2026-02-02 17:31:53.019282284 +0000 UTC m=+0.643441396 container remove 8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:31:53 np0005605476 systemd[1]: libpod-conmon-8d2baa469dbadf724893b0bc7ff585c319536804f566cb9c1c732dc9ea5dfca5.scope: Deactivated successfully.
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.418853357 +0000 UTC m=+0.057114338 container create 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:31:53 np0005605476 systemd[1]: Started libpod-conmon-2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec.scope.
Feb  2 12:31:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.395930748 +0000 UTC m=+0.034191779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.500394985 +0000 UTC m=+0.138656026 container init 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.508043275 +0000 UTC m=+0.146304286 container start 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.511818138 +0000 UTC m=+0.150079129 container attach 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:31:53 np0005605476 upbeat_kalam[163542]: 167 167
Feb  2 12:31:53 np0005605476 systemd[1]: libpod-2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec.scope: Deactivated successfully.
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.513813793 +0000 UTC m=+0.152074804 container died 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:31:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-dc687cb1176d973d28e3d18cb9c346deb71c6dab74c5273bd7ce43f5daae4d4b-merged.mount: Deactivated successfully.
Feb  2 12:31:53 np0005605476 podman[163526]: 2026-02-02 17:31:53.549101171 +0000 UTC m=+0.187362182 container remove 2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:31:53 np0005605476 systemd[1]: libpod-conmon-2e87bf3c8ab0fcf1f9b2077141f7a6c2efe1a0f74258c8ee215aca96f559c2ec.scope: Deactivated successfully.
Feb  2 12:31:53 np0005605476 podman[163565]: 2026-02-02 17:31:53.73822025 +0000 UTC m=+0.056681326 container create 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:31:53 np0005605476 systemd[1]: Started libpod-conmon-864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f.scope.
Feb  2 12:31:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c00659880f4b508f8a75f6ce533083c4035083b2aab7f34de5ccb1a80a890f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c00659880f4b508f8a75f6ce533083c4035083b2aab7f34de5ccb1a80a890f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c00659880f4b508f8a75f6ce533083c4035083b2aab7f34de5ccb1a80a890f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c00659880f4b508f8a75f6ce533083c4035083b2aab7f34de5ccb1a80a890f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:53 np0005605476 podman[163565]: 2026-02-02 17:31:53.714964272 +0000 UTC m=+0.033425438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:53 np0005605476 podman[163565]: 2026-02-02 17:31:53.81948788 +0000 UTC m=+0.137948956 container init 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb  2 12:31:53 np0005605476 podman[163565]: 2026-02-02 17:31:53.825529736 +0000 UTC m=+0.143990842 container start 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:31:53 np0005605476 podman[163565]: 2026-02-02 17:31:53.829632538 +0000 UTC m=+0.148093614 container attach 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]: {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    "0": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "devices": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "/dev/loop3"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            ],
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_name": "ceph_lv0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_size": "21470642176",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "name": "ceph_lv0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "tags": {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_name": "ceph",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.crush_device_class": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.encrypted": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.objectstore": "bluestore",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_id": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.vdo": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.with_tpm": "0"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            },
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "vg_name": "ceph_vg0"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        }
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    ],
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    "1": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "devices": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "/dev/loop4"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            ],
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_name": "ceph_lv1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_size": "21470642176",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "name": "ceph_lv1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "tags": {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_name": "ceph",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.crush_device_class": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.encrypted": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.objectstore": "bluestore",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_id": "1",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.vdo": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.with_tpm": "0"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            },
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "vg_name": "ceph_vg1"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        }
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    ],
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    "2": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "devices": [
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "/dev/loop5"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            ],
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_name": "ceph_lv2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_size": "21470642176",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "name": "ceph_lv2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "tags": {
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.cluster_name": "ceph",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.crush_device_class": "",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.encrypted": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.objectstore": "bluestore",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osd_id": "2",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.vdo": "0",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:                "ceph.with_tpm": "0"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            },
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "type": "block",
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:            "vg_name": "ceph_vg2"
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:        }
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]:    ]
Feb  2 12:31:54 np0005605476 mystifying_allen[163581]: }
Feb  2 12:31:54 np0005605476 systemd[1]: libpod-864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f.scope: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163565]: 2026-02-02 17:31:54.114271818 +0000 UTC m=+0.432732904 container died 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb  2 12:31:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f6c00659880f4b508f8a75f6ce533083c4035083b2aab7f34de5ccb1a80a890f-merged.mount: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163565]: 2026-02-02 17:31:54.154893063 +0000 UTC m=+0.473354149 container remove 864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_allen, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:31:54 np0005605476 systemd[1]: libpod-conmon-864671946a3a25b06ffbf67b9e1d12f46bf091a7707635990f5dd0b2b292517f.scope: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.614076681 +0000 UTC m=+0.031941688 container create 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:31:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:31:54 np0005605476 systemd[1]: Started libpod-conmon-8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603.scope.
Feb  2 12:31:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.682491558 +0000 UTC m=+0.100356595 container init 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.690744384 +0000 UTC m=+0.108609401 container start 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:31:54 np0005605476 epic_hawking[163680]: 167 167
Feb  2 12:31:54 np0005605476 systemd[1]: libpod-8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603.scope: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.697793778 +0000 UTC m=+0.115658825 container attach 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.600490288 +0000 UTC m=+0.018355325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.698479207 +0000 UTC m=+0.116344224 container died 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:31:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-613bcf969651479d3ac147cd6c9096107c9d37ba5a87ee8c71de4424954c61e7-merged.mount: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163664]: 2026-02-02 17:31:54.736532971 +0000 UTC m=+0.154398018 container remove 8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:31:54 np0005605476 systemd[1]: libpod-conmon-8b549a671d1a145072d206a7a6c37e2576e69968c11b6639c73b1eb39dcb0603.scope: Deactivated successfully.
Feb  2 12:31:54 np0005605476 podman[163704]: 2026-02-02 17:31:54.886547157 +0000 UTC m=+0.062709682 container create 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:31:54 np0005605476 systemd[1]: Started libpod-conmon-38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09.scope.
Feb  2 12:31:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:31:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa123534cade2c00b1dd3b0c8c4d27a08a380dd0e69fe29780697f841b3d6b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa123534cade2c00b1dd3b0c8c4d27a08a380dd0e69fe29780697f841b3d6b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa123534cade2c00b1dd3b0c8c4d27a08a380dd0e69fe29780697f841b3d6b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa123534cade2c00b1dd3b0c8c4d27a08a380dd0e69fe29780697f841b3d6b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:31:54 np0005605476 podman[163704]: 2026-02-02 17:31:54.856100571 +0000 UTC m=+0.032263106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:31:54 np0005605476 podman[163704]: 2026-02-02 17:31:54.96975339 +0000 UTC m=+0.145915895 container init 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:31:54 np0005605476 podman[163704]: 2026-02-02 17:31:54.983319352 +0000 UTC m=+0.159481887 container start 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:31:54 np0005605476 podman[163704]: 2026-02-02 17:31:54.98725022 +0000 UTC m=+0.163412735 container attach 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:31:55 np0005605476 lvm[163799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:31:55 np0005605476 lvm[163799]: VG ceph_vg0 finished
Feb  2 12:31:55 np0005605476 lvm[163802]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:31:55 np0005605476 lvm[163802]: VG ceph_vg1 finished
Feb  2 12:31:55 np0005605476 lvm[163804]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:31:55 np0005605476 lvm[163804]: VG ceph_vg2 finished
Feb  2 12:31:55 np0005605476 mystifying_khayyam[163720]: {}
Feb  2 12:31:55 np0005605476 systemd[1]: libpod-38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09.scope: Deactivated successfully.
Feb  2 12:31:55 np0005605476 podman[163704]: 2026-02-02 17:31:55.705712153 +0000 UTC m=+0.881874668 container died 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:31:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3aa123534cade2c00b1dd3b0c8c4d27a08a380dd0e69fe29780697f841b3d6b1-merged.mount: Deactivated successfully.
Feb  2 12:31:55 np0005605476 podman[163704]: 2026-02-02 17:31:55.754344317 +0000 UTC m=+0.930506852 container remove 38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khayyam, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:31:55 np0005605476 systemd[1]: libpod-conmon-38033bd8aa591550f6d2c8570b3d02f5e3d278b305ea3f843ce3095050753e09.scope: Deactivated successfully.
Feb  2 12:31:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:31:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:31:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:56 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:31:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:31:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:04 np0005605476 kernel: SELinux:  Converting 2777 SID table entries...
Feb  2 12:32:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:32:04 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:32:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:13 np0005605476 kernel: SELinux:  Converting 2777 SID table entries...
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:32:13 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:32:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb  2 12:32:16 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb  2 12:32:16 np0005605476 podman[163863]: 2026-02-02 17:32:16.632351991 +0000 UTC m=+0.068387467 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:32:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:32:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:32:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:19 np0005605476 podman[163881]: 2026-02-02 17:32:19.645957317 +0000 UTC m=+0.098129473 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  2 12:32:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:32:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:32:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:32:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb  2 12:32:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:32:36
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups', 'vms', '.rgw.root', '.mgr']
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:32:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:32:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:32:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:32:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:32:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:32:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:32:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:32:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:32:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:32:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:32:47 np0005605476 podman[180784]: 2026-02-02 17:32:47.632648107 +0000 UTC m=+0.078265288 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:32:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:50 np0005605476 podman[180805]: 2026-02-02 17:32:50.670727066 +0000 UTC m=+0.125293009 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb  2 12:32:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:56 np0005605476 kernel: SELinux:  Converting 2778 SID table entries...
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability open_perms=1
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability always_check_network=0
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 12:32:56 np0005605476 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 12:32:56 np0005605476 podman[180937]: 2026-02-02 17:32:56.385255219 +0000 UTC m=+0.056558191 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:32:56 np0005605476 podman[180937]: 2026-02-02 17:32:56.479371905 +0000 UTC m=+0.150674857 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:32:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:57 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:32:57 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb  2 12:32:57 np0005605476 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:32:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:32:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:32:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:32:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.195575749 +0000 UTC m=+0.042646600 container create 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:32:58 np0005605476 systemd[1]: Started libpod-conmon-94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94.scope.
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.176230745 +0000 UTC m=+0.023301586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:32:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.307069174 +0000 UTC m=+0.154140015 container init 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.315607964 +0000 UTC m=+0.162678805 container start 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.319289698 +0000 UTC m=+0.166360619 container attach 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:32:58 np0005605476 vibrant_greider[181334]: 167 167
Feb  2 12:32:58 np0005605476 systemd[1]: libpod-94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94.scope: Deactivated successfully.
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.322819137 +0000 UTC m=+0.169889988 container died 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:32:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1ae59ccf33c7779ef4e80af48e4e7fc50190727239e14412f52830d146804e2e-merged.mount: Deactivated successfully.
Feb  2 12:32:58 np0005605476 podman[181306]: 2026-02-02 17:32:58.365407814 +0000 UTC m=+0.212478655 container remove 94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_greider, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:32:58 np0005605476 systemd[1]: libpod-conmon-94ea6d162c28ddf234d9f8d0f0d4aa04f2b999c3b4c7e36ad7f81af95061ce94.scope: Deactivated successfully.
Feb  2 12:32:58 np0005605476 podman[181357]: 2026-02-02 17:32:58.49401457 +0000 UTC m=+0.045851650 container create 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:32:58 np0005605476 systemd[1]: Started libpod-conmon-22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be.scope.
Feb  2 12:32:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:32:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:32:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:32:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:32:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:32:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:32:58 np0005605476 podman[181357]: 2026-02-02 17:32:58.473966246 +0000 UTC m=+0.025803376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:32:58 np0005605476 podman[181357]: 2026-02-02 17:32:58.581465609 +0000 UTC m=+0.133302709 container init 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:32:58 np0005605476 podman[181357]: 2026-02-02 17:32:58.586754268 +0000 UTC m=+0.138591348 container start 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:32:58 np0005605476 podman[181357]: 2026-02-02 17:32:58.591228244 +0000 UTC m=+0.143065324 container attach 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:32:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:32:59 np0005605476 adoring_allen[181373]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:32:59 np0005605476 adoring_allen[181373]: --> All data devices are unavailable
Feb  2 12:32:59 np0005605476 systemd[1]: libpod-22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be.scope: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181407]: 2026-02-02 17:32:59.122984474 +0000 UTC m=+0.025071366 container died 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:32:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-266e78af795277a075d19e2625a075d0e967bdbd195b51a540d61af0ed484755-merged.mount: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181407]: 2026-02-02 17:32:59.165232462 +0000 UTC m=+0.067319354 container remove 22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:32:59 np0005605476 systemd[1]: libpod-conmon-22003385a45226df5212f8ccb4b44076312de3f84845f571d66af598923e96be.scope: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.609902364 +0000 UTC m=+0.063733433 container create e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:32:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:32:59 np0005605476 systemd[1]: Started libpod-conmon-e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913.scope.
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.579344645 +0000 UTC m=+0.033175764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:32:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.703133306 +0000 UTC m=+0.156964385 container init e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.711579323 +0000 UTC m=+0.165410372 container start e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.714910417 +0000 UTC m=+0.168741556 container attach e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:32:59 np0005605476 lucid_fermi[181503]: 167 167
Feb  2 12:32:59 np0005605476 systemd[1]: libpod-e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913.scope: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.717358736 +0000 UTC m=+0.171189815 container died e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:32:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2c6a1627d61c8303691a85b2ae509ce0b7fcbf75ac9ac4b302b30330eae841ea-merged.mount: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181486]: 2026-02-02 17:32:59.764709987 +0000 UTC m=+0.218541066 container remove e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:32:59 np0005605476 systemd[1]: libpod-conmon-e1fc1263773d8135bf67078ef1235808bb0f80222d8b59efdfa518f80f5aa913.scope: Deactivated successfully.
Feb  2 12:32:59 np0005605476 podman[181527]: 2026-02-02 17:32:59.934804739 +0000 UTC m=+0.061926362 container create 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:32:59 np0005605476 systemd[1]: Started libpod-conmon-51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81.scope.
Feb  2 12:32:59 np0005605476 podman[181527]: 2026-02-02 17:32:59.906824013 +0000 UTC m=+0.033945726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:33:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:33:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba38dcf99eb2c3e48d274aa9b24e8bd8ac1bafe1aad2d093776b25e3610355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba38dcf99eb2c3e48d274aa9b24e8bd8ac1bafe1aad2d093776b25e3610355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba38dcf99eb2c3e48d274aa9b24e8bd8ac1bafe1aad2d093776b25e3610355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba38dcf99eb2c3e48d274aa9b24e8bd8ac1bafe1aad2d093776b25e3610355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:00 np0005605476 podman[181527]: 2026-02-02 17:33:00.061905133 +0000 UTC m=+0.189026816 container init 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:33:00 np0005605476 podman[181527]: 2026-02-02 17:33:00.071171664 +0000 UTC m=+0.198293317 container start 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:33:00 np0005605476 podman[181527]: 2026-02-02 17:33:00.075411413 +0000 UTC m=+0.202533106 container attach 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]: {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    "0": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "devices": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "/dev/loop3"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            ],
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_name": "ceph_lv0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_size": "21470642176",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "name": "ceph_lv0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "tags": {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_name": "ceph",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.crush_device_class": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.encrypted": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.objectstore": "bluestore",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_id": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.vdo": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.with_tpm": "0"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            },
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "vg_name": "ceph_vg0"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        }
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    ],
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    "1": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "devices": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "/dev/loop4"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            ],
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_name": "ceph_lv1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_size": "21470642176",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "name": "ceph_lv1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "tags": {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_name": "ceph",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.crush_device_class": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.encrypted": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.objectstore": "bluestore",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_id": "1",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.vdo": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.with_tpm": "0"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            },
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "vg_name": "ceph_vg1"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        }
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    ],
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    "2": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "devices": [
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "/dev/loop5"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            ],
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_name": "ceph_lv2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_size": "21470642176",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "name": "ceph_lv2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "tags": {
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.cluster_name": "ceph",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.crush_device_class": "",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.encrypted": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.objectstore": "bluestore",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osd_id": "2",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.vdo": "0",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:                "ceph.with_tpm": "0"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            },
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "type": "block",
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:            "vg_name": "ceph_vg2"
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:        }
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]:    ]
Feb  2 12:33:00 np0005605476 gracious_chatterjee[181544]: }
Feb  2 12:33:00 np0005605476 systemd[1]: libpod-51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81.scope: Deactivated successfully.
Feb  2 12:33:00 np0005605476 podman[181527]: 2026-02-02 17:33:00.393848436 +0000 UTC m=+0.520970049 container died 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:33:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-baba38dcf99eb2c3e48d274aa9b24e8bd8ac1bafe1aad2d093776b25e3610355-merged.mount: Deactivated successfully.
Feb  2 12:33:00 np0005605476 podman[181527]: 2026-02-02 17:33:00.438166482 +0000 UTC m=+0.565288105 container remove 51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_chatterjee, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:33:00 np0005605476 systemd[1]: libpod-conmon-51bb9900dcce384014c459840a4ccce665f1fc2fe33f17a93668d0d3aa409b81.scope: Deactivated successfully.
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.882447574 +0000 UTC m=+0.050066399 container create cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:33:00 np0005605476 systemd[1]: Started libpod-conmon-cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8.scope.
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.856789273 +0000 UTC m=+0.024408148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:33:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:33:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.968862544 +0000 UTC m=+0.136481409 container init cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.974455101 +0000 UTC m=+0.142073926 container start cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:33:00 np0005605476 focused_hellman[181760]: 167 167
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.978255048 +0000 UTC m=+0.145873913 container attach cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:33:00 np0005605476 systemd[1]: libpod-cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8.scope: Deactivated successfully.
Feb  2 12:33:00 np0005605476 podman[181720]: 2026-02-02 17:33:00.9794098 +0000 UTC m=+0.147028625 container died cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:33:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fa54982a506ae55f8431f7329f5972f0a9f308308e67c6da32bb18fa76cee6fe-merged.mount: Deactivated successfully.
Feb  2 12:33:01 np0005605476 podman[181720]: 2026-02-02 17:33:01.028306995 +0000 UTC m=+0.195925870 container remove cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hellman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:33:01 np0005605476 systemd[1]: libpod-conmon-cf251fe26891be22148ae8ffd82f5d686b5a9f9f0729c473e883ed11d2202bf8.scope: Deactivated successfully.
Feb  2 12:33:01 np0005605476 podman[181807]: 2026-02-02 17:33:01.178808927 +0000 UTC m=+0.051405027 container create f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:33:01 np0005605476 systemd[1]: Started libpod-conmon-f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5.scope.
Feb  2 12:33:01 np0005605476 podman[181807]: 2026-02-02 17:33:01.162450727 +0000 UTC m=+0.035046857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:33:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:33:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387f25cec36de6c2328fe28fa55b22d3ec27d8f52f0fb66279344caec6d70f7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387f25cec36de6c2328fe28fa55b22d3ec27d8f52f0fb66279344caec6d70f7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387f25cec36de6c2328fe28fa55b22d3ec27d8f52f0fb66279344caec6d70f7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387f25cec36de6c2328fe28fa55b22d3ec27d8f52f0fb66279344caec6d70f7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:33:01 np0005605476 podman[181807]: 2026-02-02 17:33:01.280133216 +0000 UTC m=+0.152729326 container init f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:33:01 np0005605476 podman[181807]: 2026-02-02 17:33:01.290199219 +0000 UTC m=+0.162795349 container start f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:33:01 np0005605476 podman[181807]: 2026-02-02 17:33:01.294813758 +0000 UTC m=+0.167409868 container attach f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:33:01 np0005605476 lvm[181933]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:33:01 np0005605476 lvm[181933]: VG ceph_vg2 finished
Feb  2 12:33:01 np0005605476 lvm[181930]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:33:01 np0005605476 lvm[181930]: VG ceph_vg0 finished
Feb  2 12:33:01 np0005605476 lvm[181931]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:33:01 np0005605476 lvm[181931]: VG ceph_vg1 finished
Feb  2 12:33:01 np0005605476 inspiring_ganguly[181841]: {}
Feb  2 12:33:02 np0005605476 systemd[1]: libpod-f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5.scope: Deactivated successfully.
Feb  2 12:33:02 np0005605476 podman[181807]: 2026-02-02 17:33:02.036507262 +0000 UTC m=+0.909103342 container died f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:33:02 np0005605476 systemd[1]: libpod-f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5.scope: Consumed 1.048s CPU time.
Feb  2 12:33:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-387f25cec36de6c2328fe28fa55b22d3ec27d8f52f0fb66279344caec6d70f7b-merged.mount: Deactivated successfully.
Feb  2 12:33:02 np0005605476 podman[181807]: 2026-02-02 17:33:02.079203553 +0000 UTC m=+0.951799653 container remove f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:33:02 np0005605476 systemd[1]: libpod-conmon-f889dbc28972365d21f8520d97e51e4764d5135867b64061ccd0ddc22688e2e5.scope: Deactivated successfully.
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:33:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:33:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:04 np0005605476 systemd[1]: Stopping OpenSSH server daemon...
Feb  2 12:33:04 np0005605476 systemd[1]: sshd.service: Deactivated successfully.
Feb  2 12:33:04 np0005605476 systemd[1]: Stopped OpenSSH server daemon.
Feb  2 12:33:04 np0005605476 systemd[1]: sshd.service: Consumed 2.262s CPU time, read 564.0K from disk, written 20.0K to disk.
Feb  2 12:33:04 np0005605476 systemd[1]: Stopped target sshd-keygen.target.
Feb  2 12:33:04 np0005605476 systemd[1]: Stopping sshd-keygen.target...
Feb  2 12:33:04 np0005605476 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 12:33:04 np0005605476 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 12:33:04 np0005605476 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 12:33:04 np0005605476 systemd[1]: Reached target sshd-keygen.target.
Feb  2 12:33:04 np0005605476 systemd[1]: Starting OpenSSH server daemon...
Feb  2 12:33:04 np0005605476 systemd[1]: Started OpenSSH server daemon.
Feb  2 12:33:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:06 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:33:06 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:33:06 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:06 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:06 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:06 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:33:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:10 np0005605476 python3.9[188154]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:33:10 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:10 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:10 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:11 np0005605476 python3.9[189611]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:33:11 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:11 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:11 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:12 np0005605476 python3.9[191271]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:33:12 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:12 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:12 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:12 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:33:12 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:33:12 np0005605476 systemd[1]: man-db-cache-update.service: Consumed 7.743s CPU time.
Feb  2 12:33:12 np0005605476 systemd[1]: run-r74ae78a1d8a4414284a32a714d19ce04.service: Deactivated successfully.
Feb  2 12:33:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:13 np0005605476 python3.9[191959]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:33:13 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:13 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:13 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:14 np0005605476 python3.9[192149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:14 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:14 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:14 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:15 np0005605476 python3.9[192339]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:15 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:15 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:15 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:16 np0005605476 python3.9[192530]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:16 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:16 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:16 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:17 np0005605476 python3.9[192719]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:17 np0005605476 podman[192721]: 2026-02-02 17:33:17.785707283 +0000 UTC m=+0.065146193 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Feb  2 12:33:18 np0005605476 python3.9[192893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:18 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:18 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:18 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:19 np0005605476 python3.9[193084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 12:33:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:19 np0005605476 systemd[1]: Reloading.
Feb  2 12:33:19 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:33:19 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:33:19 np0005605476 systemd[1]: Listening on libvirt proxy daemon socket.
Feb  2 12:33:19 np0005605476 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb  2 12:33:20 np0005605476 python3.9[193277]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:20 np0005605476 podman[193279]: 2026-02-02 17:33:20.841508261 +0000 UTC m=+0.067002855 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible)
Feb  2 12:33:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:21 np0005605476 python3.9[193458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:23 np0005605476 python3.9[193613]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:23 np0005605476 python3.9[193768]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:24 np0005605476 python3.9[193923]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:25 np0005605476 python3.9[194078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:26 np0005605476 python3.9[194233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:27 np0005605476 python3.9[194388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:27 np0005605476 python3.9[194543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:28 np0005605476 python3.9[194698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:30 np0005605476 python3.9[194853]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:31 np0005605476 python3.9[195008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:31 np0005605476 python3.9[195163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:32 np0005605476 python3.9[195318]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 12:33:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:33 np0005605476 python3.9[195473]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:34 np0005605476 python3.9[195625]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:34 np0005605476 python3.9[195777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:35 np0005605476 python3.9[195929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:35 np0005605476 python3.9[196081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:36 np0005605476 python3.9[196233]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:33:36
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:33:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:37 np0005605476 python3.9[196383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:33:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:33:38 np0005605476 python3.9[196535]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:38 np0005605476 python3.9[196660]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053617.5975187-557-97929408600281/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:39 np0005605476 python3.9[196812]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:40 np0005605476 python3.9[196937]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053619.1464436-557-30654709738802/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:40 np0005605476 python3.9[197089]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:41 np0005605476 python3.9[197214]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053620.2164693-557-96733859383474/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:41 np0005605476 python3.9[197366]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:42 np0005605476 python3.9[197491]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053621.38263-557-155700575740573/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:42 np0005605476 python3.9[197643]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:43 np0005605476 python3.9[197768]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053622.5592558-557-36849332100193/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:44 np0005605476 python3.9[197920]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:44 np0005605476 python3.9[198045]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053623.621853-557-271043438237031/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:45 np0005605476 python3.9[198197]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:45 np0005605476 python3.9[198320]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053624.805258-557-188948982314822/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:46 np0005605476 python3.9[198472]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:33:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:33:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:33:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:33:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:33:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:33:46 np0005605476 python3.9[198597]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770053625.929543-557-165065620164675/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:33:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:33:47 np0005605476 python3.9[198749]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb  2 12:33:48 np0005605476 podman[198874]: 2026-02-02 17:33:48.054812783 +0000 UTC m=+0.058377213 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:33:48 np0005605476 python3.9[198918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:48 np0005605476 python3.9[199074]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:49 np0005605476 python3.9[199226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:50 np0005605476 python3.9[199378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:50 np0005605476 python3.9[199530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:51 np0005605476 podman[199654]: 2026-02-02 17:33:51.389591805 +0000 UTC m=+0.101193297 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Feb  2 12:33:51 np0005605476 python3.9[199697]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:52 np0005605476 python3.9[199858]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:52 np0005605476 python3.9[200010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:53 np0005605476 python3.9[200162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:54 np0005605476 python3.9[200314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:33:54 np0005605476 python3.9[200466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:55 np0005605476 python3.9[200618]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:56 np0005605476 python3.9[200770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:56 np0005605476 python3.9[200922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:57 np0005605476 python3.9[201074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:57 np0005605476 python3.9[201197]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053636.957474-778-43959505850275/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:58 np0005605476 python3.9[201349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:33:59 np0005605476 python3.9[201472]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053638.1265156-778-226765219139253/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:33:59 np0005605476 python3.9[201624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:33:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:00 np0005605476 python3.9[201747]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053639.1654801-778-275084074030059/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:00 np0005605476 python3.9[201899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:01 np0005605476 python3.9[202022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053640.236608-778-109593135672246/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:01 np0005605476 python3.9[202174]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:02 np0005605476 python3.9[202323]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053641.388265-778-150480050954998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:34:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:34:02 np0005605476 python3.9[202530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.169147987 +0000 UTC m=+0.035039307 container create ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:34:03 np0005605476 systemd[1]: Started libpod-conmon-ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8.scope.
Feb  2 12:34:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.228001182 +0000 UTC m=+0.093892522 container init ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.232786787 +0000 UTC m=+0.098678127 container start ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb  2 12:34:03 np0005605476 tender_northcutt[202702]: 167 167
Feb  2 12:34:03 np0005605476 systemd[1]: libpod-ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8.scope: Deactivated successfully.
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.237849719 +0000 UTC m=+0.103741049 container attach ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.238333353 +0000 UTC m=+0.104224683 container died ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.155476802 +0000 UTC m=+0.021368132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-cd6e7a50c1add54d93fc40d3bd6a8f017007ea406294b72fe15f21baa4942828-merged.mount: Deactivated successfully.
Feb  2 12:34:03 np0005605476 podman[202662]: 2026-02-02 17:34:03.272544785 +0000 UTC m=+0.138436115 container remove ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:34:03 np0005605476 systemd[1]: libpod-conmon-ddb9472348578c31d68db686fe141f7fc4696d00d5db86fedf78e5bed29a2db8.scope: Deactivated successfully.
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.403802276 +0000 UTC m=+0.044536404 container create 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:34:03 np0005605476 python3.9[202735]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053642.550985-778-123492140478538/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:03 np0005605476 systemd[1]: Started libpod-conmon-80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09.scope.
Feb  2 12:34:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.379952055 +0000 UTC m=+0.020686223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.487247863 +0000 UTC m=+0.127981961 container init 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.492807949 +0000 UTC m=+0.133542037 container start 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.496220845 +0000 UTC m=+0.136954933 container attach 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:34:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:34:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:34:03 np0005605476 sweet_mestorf[202772]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:34:03 np0005605476 sweet_mestorf[202772]: --> All data devices are unavailable
Feb  2 12:34:03 np0005605476 systemd[1]: libpod-80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09.scope: Deactivated successfully.
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.936016236 +0000 UTC m=+0.576750344 container died 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:34:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3c8fb4e0daf04c75c53fdb64fa83947d7bd102628a77e5ba0692c00dcf5abd28-merged.mount: Deactivated successfully.
Feb  2 12:34:03 np0005605476 podman[202755]: 2026-02-02 17:34:03.974162819 +0000 UTC m=+0.614896897 container remove 80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:34:03 np0005605476 systemd[1]: libpod-conmon-80080be22e8245b699a4b8c9e5be844ad5f4f1801f826aa183173d3fa3aa3d09.scope: Deactivated successfully.
Feb  2 12:34:04 np0005605476 python3.9[202938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.370407985 +0000 UTC m=+0.048291870 container create 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:34:04 np0005605476 systemd[1]: Started libpod-conmon-97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce.scope.
Feb  2 12:34:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.345686769 +0000 UTC m=+0.023570744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.444791597 +0000 UTC m=+0.122675492 container init 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.453503302 +0000 UTC m=+0.131387187 container start 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.456527777 +0000 UTC m=+0.134411692 container attach 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:34:04 np0005605476 heuristic_rhodes[203156]: 167 167
Feb  2 12:34:04 np0005605476 systemd[1]: libpod-97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce.scope: Deactivated successfully.
Feb  2 12:34:04 np0005605476 conmon[203156]: conmon 97455d33205b152e5352 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce.scope/container/memory.events
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.460014815 +0000 UTC m=+0.137898730 container died 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:34:04 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8ac5ee99e3d0c895e5d1f3fa750dd4e28938dddc1323105542697e72c7e0fd57-merged.mount: Deactivated successfully.
Feb  2 12:34:04 np0005605476 podman[203139]: 2026-02-02 17:34:04.500032301 +0000 UTC m=+0.177916206 container remove 97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:34:04 np0005605476 python3.9[203141]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053643.5644016-778-39672546200206/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:04 np0005605476 systemd[1]: libpod-conmon-97455d33205b152e5352803c0a6de7aea4f00bb1fe5f818abbc4bd8696d91cce.scope: Deactivated successfully.
Feb  2 12:34:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:04 np0005605476 podman[203204]: 2026-02-02 17:34:04.67917376 +0000 UTC m=+0.051387687 container create b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:34:04 np0005605476 systemd[1]: Started libpod-conmon-b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38.scope.
Feb  2 12:34:04 np0005605476 podman[203204]: 2026-02-02 17:34:04.653549419 +0000 UTC m=+0.025763436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4446cb7e808805adbadb04fe680f6499b6aba42bcb87e626ce20501bb6ee85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4446cb7e808805adbadb04fe680f6499b6aba42bcb87e626ce20501bb6ee85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4446cb7e808805adbadb04fe680f6499b6aba42bcb87e626ce20501bb6ee85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4446cb7e808805adbadb04fe680f6499b6aba42bcb87e626ce20501bb6ee85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:04 np0005605476 podman[203204]: 2026-02-02 17:34:04.781945781 +0000 UTC m=+0.154159768 container init b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:34:04 np0005605476 podman[203204]: 2026-02-02 17:34:04.789991797 +0000 UTC m=+0.162205724 container start b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:34:04 np0005605476 podman[203204]: 2026-02-02 17:34:04.793215577 +0000 UTC m=+0.165429504 container attach b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:34:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:05 np0005605476 python3.9[203353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:05 np0005605476 brave_golick[203272]: {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    "0": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "devices": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "/dev/loop3"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            ],
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_name": "ceph_lv0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_size": "21470642176",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "name": "ceph_lv0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "tags": {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_name": "ceph",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.crush_device_class": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.encrypted": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.objectstore": "bluestore",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_id": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.vdo": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.with_tpm": "0"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            },
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "vg_name": "ceph_vg0"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        }
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    ],
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    "1": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "devices": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "/dev/loop4"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            ],
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_name": "ceph_lv1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_size": "21470642176",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "name": "ceph_lv1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "tags": {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_name": "ceph",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.crush_device_class": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.encrypted": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.objectstore": "bluestore",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_id": "1",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.vdo": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.with_tpm": "0"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            },
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "vg_name": "ceph_vg1"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        }
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    ],
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    "2": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "devices": [
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "/dev/loop5"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            ],
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_name": "ceph_lv2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_size": "21470642176",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "name": "ceph_lv2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "tags": {
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.cluster_name": "ceph",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.crush_device_class": "",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.encrypted": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.objectstore": "bluestore",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osd_id": "2",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.vdo": "0",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:                "ceph.with_tpm": "0"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            },
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "type": "block",
Feb  2 12:34:05 np0005605476 brave_golick[203272]:            "vg_name": "ceph_vg2"
Feb  2 12:34:05 np0005605476 brave_golick[203272]:        }
Feb  2 12:34:05 np0005605476 brave_golick[203272]:    ]
Feb  2 12:34:05 np0005605476 brave_golick[203272]: }
Feb  2 12:34:05 np0005605476 systemd[1]: libpod-b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38.scope: Deactivated successfully.
Feb  2 12:34:05 np0005605476 podman[203204]: 2026-02-02 17:34:05.152293978 +0000 UTC m=+0.524507905 container died b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:34:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8b4446cb7e808805adbadb04fe680f6499b6aba42bcb87e626ce20501bb6ee85-merged.mount: Deactivated successfully.
Feb  2 12:34:05 np0005605476 podman[203204]: 2026-02-02 17:34:05.188925618 +0000 UTC m=+0.561139535 container remove b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:34:05 np0005605476 systemd[1]: libpod-conmon-b5cc41a5d9df91c8b9d7c417f6d969cafd0e0f4f8e7fb86212c1d301830f3c38.scope: Deactivated successfully.
Feb  2 12:34:05 np0005605476 python3.9[203540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053644.6715488-778-69336492460675/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.590580796 +0000 UTC m=+0.055573574 container create 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:34:05 np0005605476 systemd[1]: Started libpod-conmon-102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919.scope.
Feb  2 12:34:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.657491178 +0000 UTC m=+0.122483996 container init 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.568599768 +0000 UTC m=+0.033592646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.665914875 +0000 UTC m=+0.130907663 container start 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.669218688 +0000 UTC m=+0.134211546 container attach 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:34:05 np0005605476 affectionate_kepler[203572]: 167 167
Feb  2 12:34:05 np0005605476 systemd[1]: libpod-102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919.scope: Deactivated successfully.
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.673775516 +0000 UTC m=+0.138768324 container died 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:34:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-96e984a09ba9b5e08b9bb2d1d71e224458837d2e930f70101911d4a664a9c65f-merged.mount: Deactivated successfully.
Feb  2 12:34:05 np0005605476 podman[203554]: 2026-02-02 17:34:05.717718132 +0000 UTC m=+0.182710960 container remove 102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:34:05 np0005605476 systemd[1]: libpod-conmon-102a29decee547e4bfb6791380d7a5d235e4513bf798e88db368e767cd828919.scope: Deactivated successfully.
Feb  2 12:34:05 np0005605476 podman[203671]: 2026-02-02 17:34:05.876804147 +0000 UTC m=+0.047884998 container create 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:34:05 np0005605476 systemd[1]: Started libpod-conmon-814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd.scope.
Feb  2 12:34:05 np0005605476 podman[203671]: 2026-02-02 17:34:05.858336537 +0000 UTC m=+0.029417398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:34:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:34:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6f0cd03521a150271ff5ac8ba182a1be8b9269910f66b4bbd06dca8edcfdd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6f0cd03521a150271ff5ac8ba182a1be8b9269910f66b4bbd06dca8edcfdd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6f0cd03521a150271ff5ac8ba182a1be8b9269910f66b4bbd06dca8edcfdd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6f0cd03521a150271ff5ac8ba182a1be8b9269910f66b4bbd06dca8edcfdd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:34:06 np0005605476 podman[203671]: 2026-02-02 17:34:06.003891592 +0000 UTC m=+0.174972463 container init 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:34:06 np0005605476 podman[203671]: 2026-02-02 17:34:06.01201631 +0000 UTC m=+0.183097141 container start 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:34:06 np0005605476 podman[203671]: 2026-02-02 17:34:06.018588155 +0000 UTC m=+0.189669026 container attach 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:34:06 np0005605476 python3.9[203768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:06 np0005605476 lvm[203963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:34:06 np0005605476 lvm[203963]: VG ceph_vg0 finished
Feb  2 12:34:06 np0005605476 lvm[203966]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:34:06 np0005605476 lvm[203966]: VG ceph_vg1 finished
Feb  2 12:34:06 np0005605476 lvm[203968]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:34:06 np0005605476 lvm[203968]: VG ceph_vg2 finished
Feb  2 12:34:06 np0005605476 python3.9[203960]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053645.7359798-778-99885022994144/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:06 np0005605476 magical_aryabhata[203712]: {}
Feb  2 12:34:06 np0005605476 systemd[1]: libpod-814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd.scope: Deactivated successfully.
Feb  2 12:34:06 np0005605476 podman[203671]: 2026-02-02 17:34:06.816827708 +0000 UTC m=+0.987908569 container died 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:34:06 np0005605476 systemd[1]: libpod-814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd.scope: Consumed 1.087s CPU time.
Feb  2 12:34:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6f6f0cd03521a150271ff5ac8ba182a1be8b9269910f66b4bbd06dca8edcfdd3-merged.mount: Deactivated successfully.
Feb  2 12:34:06 np0005605476 podman[203671]: 2026-02-02 17:34:06.871871296 +0000 UTC m=+1.042952117 container remove 814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:34:06 np0005605476 systemd[1]: libpod-conmon-814ba0779719f0c7279d45a13ee58060b2848d392a5ef16a8da3e2c0779b85cd.scope: Deactivated successfully.
Feb  2 12:34:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:34:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:34:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:07 np0005605476 python3.9[204161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:07 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:07 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:34:07 np0005605476 python3.9[204284]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053646.8953257-778-160703214271824/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:08 np0005605476 python3.9[204436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:09 np0005605476 python3.9[204559]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053648.1109297-778-209712681550926/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:09 np0005605476 python3.9[204711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:10 np0005605476 python3.9[204834]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053649.3732028-778-228909612177236/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:11 np0005605476 python3.9[204986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:11 np0005605476 auditd[704]: Audit daemon rotating log files
Feb  2 12:34:11 np0005605476 python3.9[205109]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053650.5453897-778-258307214690862/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:12 np0005605476 python3.9[205261]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:12 np0005605476 python3.9[205384]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053651.6995103-778-138175578940082/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:13 np0005605476 python3.9[205534]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:13 np0005605476 python3.9[205689]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb  2 12:34:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:15 np0005605476 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb  2 12:34:15 np0005605476 python3.9[205845]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:15 np0005605476 python3.9[205997]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:16 np0005605476 python3.9[206149]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:16 np0005605476 python3.9[206301]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:17 np0005605476 python3.9[206453]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:18 np0005605476 python3.9[206605]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:18 np0005605476 podman[206729]: 2026-02-02 17:34:18.434341233 +0000 UTC m=+0.045298545 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:34:18 np0005605476 python3.9[206768]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:19 np0005605476 python3.9[206926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:19 np0005605476 python3.9[207078]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:20 np0005605476 python3.9[207230]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:20 np0005605476 python3.9[207382]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:34:20 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:20 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:20 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:21 np0005605476 systemd[1]: Starting libvirt logging daemon socket...
Feb  2 12:34:21 np0005605476 systemd[1]: Listening on libvirt logging daemon socket.
Feb  2 12:34:21 np0005605476 systemd[1]: Starting libvirt logging daemon admin socket...
Feb  2 12:34:21 np0005605476 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb  2 12:34:21 np0005605476 systemd[1]: Starting libvirt logging daemon...
Feb  2 12:34:21 np0005605476 systemd[1]: Started libvirt logging daemon.
Feb  2 12:34:21 np0005605476 podman[207506]: 2026-02-02 17:34:21.702157589 +0000 UTC m=+0.148949510 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true)
Feb  2 12:34:22 np0005605476 python3.9[207602]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:34:22 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:22 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:22 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:22 np0005605476 systemd[1]: Starting libvirt nodedev daemon socket...
Feb  2 12:34:22 np0005605476 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb  2 12:34:22 np0005605476 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb  2 12:34:22 np0005605476 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb  2 12:34:22 np0005605476 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb  2 12:34:22 np0005605476 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb  2 12:34:22 np0005605476 systemd[1]: Starting libvirt nodedev daemon...
Feb  2 12:34:22 np0005605476 systemd[1]: Started libvirt nodedev daemon.
Feb  2 12:34:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:23 np0005605476 python3.9[207819]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:34:23 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:23 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:23 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:23 np0005605476 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb  2 12:34:23 np0005605476 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb  2 12:34:23 np0005605476 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb  2 12:34:23 np0005605476 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb  2 12:34:23 np0005605476 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb  2 12:34:23 np0005605476 systemd[1]: Starting libvirt proxy daemon...
Feb  2 12:34:23 np0005605476 systemd[1]: Started libvirt proxy daemon.
Feb  2 12:34:23 np0005605476 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb  2 12:34:23 np0005605476 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb  2 12:34:23 np0005605476 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb  2 12:34:24 np0005605476 python3.9[208038]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:34:24 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:24 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:24 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:24 np0005605476 systemd[1]: Listening on libvirt locking daemon socket.
Feb  2 12:34:24 np0005605476 systemd[1]: Starting libvirt QEMU daemon socket...
Feb  2 12:34:24 np0005605476 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  2 12:34:24 np0005605476 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb  2 12:34:24 np0005605476 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb  2 12:34:24 np0005605476 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb  2 12:34:24 np0005605476 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb  2 12:34:24 np0005605476 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb  2 12:34:24 np0005605476 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb  2 12:34:24 np0005605476 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb  2 12:34:24 np0005605476 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 12:34:24 np0005605476 systemd[1]: Started libvirt QEMU daemon.
Feb  2 12:34:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:24 np0005605476 setroubleshoot[207856]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c5d29da1-0dfd-4319-8684-8d4ed243348d
Feb  2 12:34:24 np0005605476 setroubleshoot[207856]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  2 12:34:24 np0005605476 setroubleshoot[207856]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c5d29da1-0dfd-4319-8684-8d4ed243348d
Feb  2 12:34:24 np0005605476 setroubleshoot[207856]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  2 12:34:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:25 np0005605476 python3.9[208256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:34:25 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:25 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:25 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:25 np0005605476 systemd[1]: Starting libvirt secret daemon socket...
Feb  2 12:34:25 np0005605476 systemd[1]: Listening on libvirt secret daemon socket.
Feb  2 12:34:25 np0005605476 systemd[1]: Starting libvirt secret daemon admin socket...
Feb  2 12:34:25 np0005605476 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb  2 12:34:25 np0005605476 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb  2 12:34:25 np0005605476 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb  2 12:34:25 np0005605476 systemd[1]: Starting libvirt secret daemon...
Feb  2 12:34:25 np0005605476 systemd[1]: Started libvirt secret daemon.
Feb  2 12:34:26 np0005605476 python3.9[208468]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:27 np0005605476 python3.9[208620]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:34:28 np0005605476 python3.9[208772]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:29 np0005605476 python3.9[208926]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:34:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:29 np0005605476 python3.9[209076]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:30 np0005605476 python3.9[209197]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053669.3761694-1136-63616740965777/.source.xml follow=False _original_basename=secret.xml.j2 checksum=4253e50aa8d0ace256d1b9c6cc98c1f62a83524a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:30 np0005605476 python3.9[209349]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine eb48d0ef-3496-563c-b73d-661fb962013e#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:31 np0005605476 python3.9[209511]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.798748) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671798793, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3577412, "memory_usage": 3620560, "flush_reason": "Manual Compaction"}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671816528, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3490209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9710, "largest_seqno": 11753, "table_properties": {"data_size": 3480898, "index_size": 5933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17868, "raw_average_key_size": 19, "raw_value_size": 3462472, "raw_average_value_size": 3771, "num_data_blocks": 269, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053441, "oldest_key_time": 1770053441, "file_creation_time": 1770053671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17843 microseconds, and 9255 cpu microseconds.
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.816590) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3490209 bytes OK
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.816615) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.818123) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.818147) EVENT_LOG_v1 {"time_micros": 1770053671818140, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.818171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3568877, prev total WAL file size 3568877, number of live WAL files 2.
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.819026) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3408KB)], [26(6167KB)]
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671819113, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9805483, "oldest_snapshot_seqno": -1}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3710 keys, 8173197 bytes, temperature: kUnknown
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671861133, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8173197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8144534, "index_size": 18313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 89051, "raw_average_key_size": 24, "raw_value_size": 8073672, "raw_average_value_size": 2176, "num_data_blocks": 796, "num_entries": 3710, "num_filter_entries": 3710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770053671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.861391) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8173197 bytes
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.862476) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.9 rd, 194.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4224, records dropped: 514 output_compression: NoCompression
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.862505) EVENT_LOG_v1 {"time_micros": 1770053671862492, "job": 10, "event": "compaction_finished", "compaction_time_micros": 42098, "compaction_time_cpu_micros": 22457, "output_level": 6, "num_output_files": 1, "total_output_size": 8173197, "num_input_records": 4224, "num_output_records": 3710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671863019, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053671863786, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.818899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.863834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.863842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.863846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.863850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:31 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:34:31.863854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:34:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:33 np0005605476 python3.9[209974]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:34 np0005605476 python3.9[210126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:34 np0005605476 python3.9[210249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053673.5262022-1191-116435024069851/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:34 np0005605476 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb  2 12:34:34 np0005605476 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb  2 12:34:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:35 np0005605476 python3.9[210401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:36 np0005605476 python3.9[210553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:36 np0005605476 python3.9[210631]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:34:36
Feb  2 12:34:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:34:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:34:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.data', 'volumes']
Feb  2 12:34:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:37 np0005605476 python3.9[210783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:34:37 np0005605476 python3.9[210861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9plwrlux recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:34:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:34:38 np0005605476 python3.9[211013]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:38 np0005605476 python3.9[211091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:39 np0005605476 python3.9[211243]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:40 np0005605476 python3[211396]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 12:34:40 np0005605476 python3.9[211548]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:41 np0005605476 python3.9[211626]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:41 np0005605476 python3.9[211778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:42 np0005605476 python3.9[211903]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053681.3649857-1280-83886303263266/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:43 np0005605476 python3.9[212055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:43 np0005605476 python3.9[212133]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:44 np0005605476 python3.9[212285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:44 np0005605476 python3.9[212363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:45 np0005605476 python3.9[212515]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:45 np0005605476 python3.9[212640]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770053684.727962-1319-158314795569145/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:46 np0005605476 python3.9[212792]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:34:46.621 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:34:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:34:46.622 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:34:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:34:46.623 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:47 np0005605476 python3.9[212944]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:34:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:34:48 np0005605476 python3.9[213099]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:48 np0005605476 podman[213176]: 2026-02-02 17:34:48.640465817 +0000 UTC m=+0.070265638 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:34:48 np0005605476 python3.9[213270]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:49 np0005605476 python3.9[213423]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:34:50 np0005605476 python3.9[213577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:34:50 np0005605476 python3.9[213732]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:51 np0005605476 python3.9[213884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:51 np0005605476 podman[214007]: 2026-02-02 17:34:51.870098419 +0000 UTC m=+0.086314699 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 12:34:51 np0005605476 python3.9[214008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053691.0876281-1391-82212544853522/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:52 np0005605476 python3.9[214185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:53 np0005605476 python3.9[214308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053692.1085587-1406-212129300084013/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:53 np0005605476 python3.9[214460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:34:54 np0005605476 python3.9[214583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053693.310585-1421-5385945901119/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:34:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:34:54 np0005605476 python3.9[214735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:34:54 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:54 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:54 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:55 np0005605476 systemd[1]: Reached target edpm_libvirt.target.
Feb  2 12:34:55 np0005605476 python3.9[214926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 12:34:55 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:55 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:55 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:56 np0005605476 systemd[1]: Reloading.
Feb  2 12:34:56 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:34:56 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:34:56 np0005605476 systemd[1]: session-48.scope: Deactivated successfully.
Feb  2 12:34:56 np0005605476 systemd[1]: session-48.scope: Consumed 3min 2.163s CPU time.
Feb  2 12:34:56 np0005605476 systemd-logind[799]: Session 48 logged out. Waiting for processes to exit.
Feb  2 12:34:56 np0005605476 systemd-logind[799]: Removed session 48.
Feb  2 12:34:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:34:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:02 np0005605476 systemd-logind[799]: New session 49 of user zuul.
Feb  2 12:35:02 np0005605476 systemd[1]: Started Session 49 of User zuul.
Feb  2 12:35:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:03 np0005605476 python3.9[215175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:35:04 np0005605476 python3.9[215329]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:35:04 np0005605476 network[215346]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:35:04 np0005605476 network[215347]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:35:04 np0005605476 network[215348]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:35:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:35:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:35:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:35:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:08 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.159179806 +0000 UTC m=+0.047349650 container create 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:35:08 np0005605476 systemd[1]: Started libpod-conmon-5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8.scope.
Feb  2 12:35:08 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.13421184 +0000 UTC m=+0.022381754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.240979634 +0000 UTC m=+0.129149478 container init 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.249264445 +0000 UTC m=+0.137434269 container start 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.252510435 +0000 UTC m=+0.140680279 container attach 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:35:08 np0005605476 vigorous_neumann[215652]: 167 167
Feb  2 12:35:08 np0005605476 systemd[1]: libpod-5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8.scope: Deactivated successfully.
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.255073157 +0000 UTC m=+0.143242981 container died 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:35:08 np0005605476 systemd[1]: var-lib-containers-storage-overlay-83ebc42db9c000b64ade3fa7720439821e45b1db8eeb791e4181e6a772b27d67-merged.mount: Deactivated successfully.
Feb  2 12:35:08 np0005605476 podman[215635]: 2026-02-02 17:35:08.295636317 +0000 UTC m=+0.183806141 container remove 5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_neumann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:35:08 np0005605476 systemd[1]: libpod-conmon-5577ae9abae2735eb1d51e61853373a3d58b73f25341776b73d105bb157f09b8.scope: Deactivated successfully.
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.440214704 +0000 UTC m=+0.042447903 container create a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:35:08 np0005605476 systemd[1]: Started libpod-conmon-a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0.scope.
Feb  2 12:35:08 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.420410652 +0000 UTC m=+0.022643841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.534881081 +0000 UTC m=+0.137114270 container init a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.543312106 +0000 UTC m=+0.145545275 container start a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.546826044 +0000 UTC m=+0.149059213 container attach a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:35:08 np0005605476 silly_benz[215692]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:35:08 np0005605476 silly_benz[215692]: --> All data devices are unavailable
Feb  2 12:35:08 np0005605476 systemd[1]: libpod-a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0.scope: Deactivated successfully.
Feb  2 12:35:08 np0005605476 podman[215676]: 2026-02-02 17:35:08.983916559 +0000 UTC m=+0.586149738 container died a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:35:09 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4e6d469bdbe063e22bda9256f4d306a93f2dc4a8fd96e0c435f6b93d5a134802-merged.mount: Deactivated successfully.
Feb  2 12:35:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:09 np0005605476 podman[215676]: 2026-02-02 17:35:09.031015751 +0000 UTC m=+0.633248900 container remove a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:35:09 np0005605476 systemd[1]: libpod-conmon-a9823ffd459b7771e90fbbcb0aff6ebeb35ab9600e81308f1c9cf0333f5226f0.scope: Deactivated successfully.
Feb  2 12:35:09 np0005605476 python3.9[215838]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.453098429 +0000 UTC m=+0.040319614 container create a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:35:09 np0005605476 systemd[1]: Started libpod-conmon-a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a.scope.
Feb  2 12:35:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.525559778 +0000 UTC m=+0.112780993 container init a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.435399466 +0000 UTC m=+0.022620701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.532547102 +0000 UTC m=+0.119768287 container start a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.535801783 +0000 UTC m=+0.123022968 container attach a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:35:09 np0005605476 nostalgic_wiles[215935]: 167 167
Feb  2 12:35:09 np0005605476 systemd[1]: libpod-a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a.scope: Deactivated successfully.
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.537574272 +0000 UTC m=+0.124795457 container died a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:35:09 np0005605476 systemd[1]: var-lib-containers-storage-overlay-649d0e69ef2334e1b1337a3e5a7901bfaad07b6b8cfd862b1e596fb06a3acca7-merged.mount: Deactivated successfully.
Feb  2 12:35:09 np0005605476 podman[215919]: 2026-02-02 17:35:09.56798755 +0000 UTC m=+0.155208735 container remove a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wiles, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:35:09 np0005605476 systemd[1]: libpod-conmon-a0fdf646db0bdb740856cba3209550d94c6121b0a3099ae8eadb256747f25c4a.scope: Deactivated successfully.
Feb  2 12:35:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:09 np0005605476 podman[215957]: 2026-02-02 17:35:09.725329192 +0000 UTC m=+0.053480880 container create 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:35:09 np0005605476 systemd[1]: Started libpod-conmon-1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da.scope.
Feb  2 12:35:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:09 np0005605476 podman[215957]: 2026-02-02 17:35:09.698153296 +0000 UTC m=+0.026305084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08bba378259db74b0b866337e964db90813330373befbf5945e9b2ecfccb85f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08bba378259db74b0b866337e964db90813330373befbf5945e9b2ecfccb85f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08bba378259db74b0b866337e964db90813330373befbf5945e9b2ecfccb85f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08bba378259db74b0b866337e964db90813330373befbf5945e9b2ecfccb85f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:09 np0005605476 podman[215957]: 2026-02-02 17:35:09.820758561 +0000 UTC m=+0.148910329 container init 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:35:09 np0005605476 podman[215957]: 2026-02-02 17:35:09.833975399 +0000 UTC m=+0.162127117 container start 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:35:09 np0005605476 podman[215957]: 2026-02-02 17:35:09.837397524 +0000 UTC m=+0.165549242 container attach 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]: {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    "0": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "devices": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "/dev/loop3"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            ],
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_name": "ceph_lv0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_size": "21470642176",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "name": "ceph_lv0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "tags": {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_name": "ceph",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.crush_device_class": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.encrypted": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.objectstore": "bluestore",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_id": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.vdo": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.with_tpm": "0"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            },
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "vg_name": "ceph_vg0"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        }
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    ],
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    "1": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "devices": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "/dev/loop4"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            ],
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_name": "ceph_lv1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_size": "21470642176",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "name": "ceph_lv1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "tags": {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_name": "ceph",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.crush_device_class": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.encrypted": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.objectstore": "bluestore",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_id": "1",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.vdo": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.with_tpm": "0"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            },
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "vg_name": "ceph_vg1"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        }
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    ],
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    "2": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "devices": [
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "/dev/loop5"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            ],
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_name": "ceph_lv2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_size": "21470642176",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "name": "ceph_lv2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "tags": {
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.cluster_name": "ceph",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.crush_device_class": "",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.encrypted": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.objectstore": "bluestore",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osd_id": "2",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.vdo": "0",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:                "ceph.with_tpm": "0"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            },
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "type": "block",
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:            "vg_name": "ceph_vg2"
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:        }
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]:    ]
Feb  2 12:35:10 np0005605476 relaxed_jennings[215997]: }
Feb  2 12:35:10 np0005605476 systemd[1]: libpod-1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da.scope: Deactivated successfully.
Feb  2 12:35:10 np0005605476 podman[215957]: 2026-02-02 17:35:10.135765016 +0000 UTC m=+0.463916704 container died 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:35:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-08bba378259db74b0b866337e964db90813330373befbf5945e9b2ecfccb85f8-merged.mount: Deactivated successfully.
Feb  2 12:35:10 np0005605476 podman[215957]: 2026-02-02 17:35:10.18510379 +0000 UTC m=+0.513255508 container remove 1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:35:10 np0005605476 systemd[1]: libpod-conmon-1fe66c736e35d296a9cd3afeeb43f5b5e814507b9ea8dbe9f4cbf99f897575da.scope: Deactivated successfully.
Feb  2 12:35:10 np0005605476 python3.9[216054]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.626822525 +0000 UTC m=+0.057839192 container create 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:35:10 np0005605476 systemd[1]: Started libpod-conmon-8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6.scope.
Feb  2 12:35:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.602169418 +0000 UTC m=+0.033186135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.701500555 +0000 UTC m=+0.132517202 container init 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.709031075 +0000 UTC m=+0.140047712 container start 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:35:10 np0005605476 fervent_mcnulty[216149]: 167 167
Feb  2 12:35:10 np0005605476 systemd[1]: libpod-8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6.scope: Deactivated successfully.
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.713466868 +0000 UTC m=+0.144483495 container attach 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.7138916 +0000 UTC m=+0.144908227 container died 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:35:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-abc018eab5782172e807c5a3e2e35fb4ec0739392b9180f4f2c77294213556cc-merged.mount: Deactivated successfully.
Feb  2 12:35:10 np0005605476 podman[216132]: 2026-02-02 17:35:10.743598808 +0000 UTC m=+0.174615445 container remove 8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:35:10 np0005605476 systemd[1]: libpod-conmon-8f2975a1b81d7db51b1230cbce055591366efd008ba2497fd394b7111d5929d6.scope: Deactivated successfully.
Feb  2 12:35:10 np0005605476 podman[216172]: 2026-02-02 17:35:10.880498121 +0000 UTC m=+0.035973473 container create b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:35:10 np0005605476 systemd[1]: Started libpod-conmon-b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a.scope.
Feb  2 12:35:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:35:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad2b23d1fd2b9869a95a6dade193c4db43cda070b9dd403f25461f4088cb514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad2b23d1fd2b9869a95a6dade193c4db43cda070b9dd403f25461f4088cb514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad2b23d1fd2b9869a95a6dade193c4db43cda070b9dd403f25461f4088cb514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:10 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad2b23d1fd2b9869a95a6dade193c4db43cda070b9dd403f25461f4088cb514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:35:10 np0005605476 podman[216172]: 2026-02-02 17:35:10.863364734 +0000 UTC m=+0.018840116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:35:10 np0005605476 podman[216172]: 2026-02-02 17:35:10.969345586 +0000 UTC m=+0.124820938 container init b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:35:10 np0005605476 podman[216172]: 2026-02-02 17:35:10.973604135 +0000 UTC m=+0.129079507 container start b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:35:10 np0005605476 podman[216172]: 2026-02-02 17:35:10.976873186 +0000 UTC m=+0.132348528 container attach b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb  2 12:35:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:11 np0005605476 lvm[216268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:35:11 np0005605476 lvm[216270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:35:11 np0005605476 lvm[216267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:35:11 np0005605476 lvm[216270]: VG ceph_vg2 finished
Feb  2 12:35:11 np0005605476 lvm[216268]: VG ceph_vg1 finished
Feb  2 12:35:11 np0005605476 lvm[216267]: VG ceph_vg0 finished
Feb  2 12:35:11 np0005605476 sad_keller[216189]: {}
Feb  2 12:35:11 np0005605476 systemd[1]: libpod-b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a.scope: Deactivated successfully.
Feb  2 12:35:11 np0005605476 systemd[1]: libpod-b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a.scope: Consumed 1.046s CPU time.
Feb  2 12:35:11 np0005605476 podman[216172]: 2026-02-02 17:35:11.71295523 +0000 UTC m=+0.868430572 container died b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:35:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bad2b23d1fd2b9869a95a6dade193c4db43cda070b9dd403f25461f4088cb514-merged.mount: Deactivated successfully.
Feb  2 12:35:11 np0005605476 podman[216172]: 2026-02-02 17:35:11.764193007 +0000 UTC m=+0.919668349 container remove b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_keller, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:35:11 np0005605476 systemd[1]: libpod-conmon-b229eba55aa22ca7cd09b0aacbca940ad6924fc01a85409d33b6c9144f4bce3a.scope: Deactivated successfully.
Feb  2 12:35:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:35:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:35:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:35:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:16 np0005605476 python3.9[216462]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:35:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:17 np0005605476 python3.9[216614]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:35:17 np0005605476 python3.9[216767]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:35:18 np0005605476 python3.9[216919]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:35:19 np0005605476 podman[217044]: 2026-02-02 17:35:19.013900127 +0000 UTC m=+0.074590599 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 12:35:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:19 np0005605476 python3.9[217090]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:35:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:19 np0005605476 python3.9[217215]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053718.6651576-90-87561427475600/.source.iscsi _original_basename=.xka1xk9z follow=False checksum=d56f5cba9c7206d1261b52381240a4fd1788fc0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:20 np0005605476 python3.9[217367]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:21 np0005605476 python3.9[217519]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:22 np0005605476 podman[217643]: 2026-02-02 17:35:22.12682279 +0000 UTC m=+0.066870674 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 12:35:22 np0005605476 python3.9[217689]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:35:22 np0005605476 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb  2 12:35:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:23 np0005605476 python3.9[217851]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:35:23 np0005605476 systemd[1]: Reloading.
Feb  2 12:35:23 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:35:23 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:35:23 np0005605476 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 12:35:23 np0005605476 systemd[1]: Starting Open-iSCSI...
Feb  2 12:35:23 np0005605476 kernel: Loading iSCSI transport class v2.0-870.
Feb  2 12:35:23 np0005605476 systemd[1]: Started Open-iSCSI.
Feb  2 12:35:23 np0005605476 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb  2 12:35:23 np0005605476 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb  2 12:35:24 np0005605476 python3.9[218051]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:35:24 np0005605476 network[218068]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:35:24 np0005605476 network[218069]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:35:24 np0005605476 network[218070]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:35:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:28 np0005605476 python3.9[218342]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:35:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:31 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:35:31 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:35:31 np0005605476 systemd[1]: Reloading.
Feb  2 12:35:31 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:35:31 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:35:31 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:35:31 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:35:31 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:35:31 np0005605476 systemd[1]: run-raf8445f2941940d6abe9f6dcf0fd9f41.service: Deactivated successfully.
Feb  2 12:35:32 np0005605476 python3.9[218658]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 12:35:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:33 np0005605476 python3.9[218810]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb  2 12:35:33 np0005605476 python3.9[218966]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:35:34 np0005605476 python3.9[219089]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053733.4487255-178-267736963550407/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:35 np0005605476 python3.9[219241]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:36 np0005605476 python3.9[219393]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:35:36 np0005605476 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 12:35:36 np0005605476 systemd[1]: Stopped Load Kernel Modules.
Feb  2 12:35:36 np0005605476 systemd[1]: Stopping Load Kernel Modules...
Feb  2 12:35:36 np0005605476 systemd[1]: Starting Load Kernel Modules...
Feb  2 12:35:36 np0005605476 systemd[1]: Finished Load Kernel Modules.
Feb  2 12:35:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:35:36
Feb  2 12:35:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:35:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:35:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Feb  2 12:35:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:35:36 np0005605476 python3.9[219549]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:35:37 np0005605476 python3.9[219702]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:35:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:35:38 np0005605476 python3.9[219854]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:35:38 np0005605476 python3.9[219977]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053737.7233136-229-246712172733321/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:39 np0005605476 python3.9[220129]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:35:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:39 np0005605476 python3.9[220282]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:40 np0005605476 python3.9[220434]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:41 np0005605476 python3.9[220586]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:41 np0005605476 python3.9[220738]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:42 np0005605476 python3.9[220890]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:42 np0005605476 python3.9[221042]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:43 np0005605476 python3.9[221194]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:43 np0005605476 python3.9[221346]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:35:44 np0005605476 python3.9[221500]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:35:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:45 np0005605476 python3.9[221653]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:35:45 np0005605476 systemd[1]: Listening on multipathd control socket.
Feb  2 12:35:46 np0005605476 python3.9[221809]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:35:46 np0005605476 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb  2 12:35:46 np0005605476 udevadm[221814]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb  2 12:35:46 np0005605476 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb  2 12:35:46 np0005605476 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 12:35:46 np0005605476 multipathd[221817]: --------start up--------
Feb  2 12:35:46 np0005605476 multipathd[221817]: read /etc/multipath.conf
Feb  2 12:35:46 np0005605476 multipathd[221817]: path checkers start up
Feb  2 12:35:46 np0005605476 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 12:35:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:35:46.623 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:35:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:35:46.624 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:35:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:35:46.624 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:35:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:35:47 np0005605476 python3.9[221976]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 12:35:48 np0005605476 python3.9[222128]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb  2 12:35:48 np0005605476 kernel: Key type psk registered
Feb  2 12:35:48 np0005605476 python3.9[222289]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:35:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:49 np0005605476 podman[222384]: 2026-02-02 17:35:49.338817912 +0000 UTC m=+0.064238811 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:35:49 np0005605476 python3.9[222431]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770053748.426822-359-182027049861053/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:50 np0005605476 python3.9[222583]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:50 np0005605476 python3.9[222735]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:35:51 np0005605476 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 12:35:51 np0005605476 systemd[1]: Stopped Load Kernel Modules.
Feb  2 12:35:51 np0005605476 systemd[1]: Stopping Load Kernel Modules...
Feb  2 12:35:51 np0005605476 systemd[1]: Starting Load Kernel Modules...
Feb  2 12:35:51 np0005605476 systemd[1]: Finished Load Kernel Modules.
Feb  2 12:35:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:51 np0005605476 python3.9[222891]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 12:35:52 np0005605476 podman[222893]: 2026-02-02 17:35:52.650849892 +0000 UTC m=+0.101342884 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:35:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:53 np0005605476 systemd[1]: Reloading.
Feb  2 12:35:53 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:35:53 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:35:53 np0005605476 systemd[1]: Reloading.
Feb  2 12:35:54 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:35:54 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:35:54 np0005605476 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 12:35:54 np0005605476 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 12:35:54 np0005605476 lvm[223027]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:35:54 np0005605476 lvm[223027]: VG ceph_vg1 finished
Feb  2 12:35:54 np0005605476 lvm[223028]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:35:54 np0005605476 lvm[223028]: VG ceph_vg0 finished
Feb  2 12:35:54 np0005605476 lvm[223031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:35:54 np0005605476 lvm[223031]: VG ceph_vg2 finished
Feb  2 12:35:54 np0005605476 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 12:35:54 np0005605476 systemd[1]: Starting man-db-cache-update.service...
Feb  2 12:35:54 np0005605476 systemd[1]: Reloading.
Feb  2 12:35:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:35:54 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:35:54 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:35:54 np0005605476 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 12:35:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:55 np0005605476 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 12:35:55 np0005605476 systemd[1]: Finished man-db-cache-update.service.
Feb  2 12:35:55 np0005605476 systemd[1]: man-db-cache-update.service: Consumed 1.221s CPU time.
Feb  2 12:35:55 np0005605476 systemd[1]: run-r746c0d3d2fa54b6c941e1597c8836984.service: Deactivated successfully.
Feb  2 12:35:56 np0005605476 python3.9[224387]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:35:56 np0005605476 systemd[1]: Stopping Open-iSCSI...
Feb  2 12:35:56 np0005605476 iscsid[217892]: iscsid shutting down.
Feb  2 12:35:56 np0005605476 systemd[1]: iscsid.service: Deactivated successfully.
Feb  2 12:35:56 np0005605476 systemd[1]: Stopped Open-iSCSI.
Feb  2 12:35:56 np0005605476 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 12:35:56 np0005605476 systemd[1]: Starting Open-iSCSI...
Feb  2 12:35:56 np0005605476 systemd[1]: Started Open-iSCSI.
Feb  2 12:35:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:57 np0005605476 python3.9[224543]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:35:57 np0005605476 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb  2 12:35:57 np0005605476 multipathd[221817]: exit (signal)
Feb  2 12:35:57 np0005605476 multipathd[221817]: --------shut down-------
Feb  2 12:35:57 np0005605476 systemd[1]: multipathd.service: Deactivated successfully.
Feb  2 12:35:57 np0005605476 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb  2 12:35:57 np0005605476 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 12:35:57 np0005605476 multipathd[224550]: --------start up--------
Feb  2 12:35:57 np0005605476 multipathd[224550]: read /etc/multipath.conf
Feb  2 12:35:57 np0005605476 multipathd[224550]: path checkers start up
Feb  2 12:35:57 np0005605476 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 12:35:58 np0005605476 python3.9[224707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 12:35:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:35:59 np0005605476 python3.9[224863]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:35:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:00 np0005605476 python3.9[225015]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:36:00 np0005605476 systemd[1]: Reloading.
Feb  2 12:36:00 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:36:00 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:36:00 np0005605476 python3.9[225200]: ansible-ansible.builtin.service_facts Invoked
Feb  2 12:36:00 np0005605476 network[225217]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 12:36:00 np0005605476 network[225218]: 'network-scripts' will be removed from distribution in near future.
Feb  2 12:36:00 np0005605476 network[225219]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 12:36:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:04 np0005605476 python3.9[225492]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:05 np0005605476 python3.9[225645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:05 np0005605476 python3.9[225798]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:06 np0005605476 python3.9[225951]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:07 np0005605476 python3.9[226104]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:07 np0005605476 python3.9[226257]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:08 np0005605476 python3.9[226410]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:09 np0005605476 python3.9[226563]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:36:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:10 np0005605476 python3.9[226716]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:10 np0005605476 python3.9[226868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:11 np0005605476 python3.9[227020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:11 np0005605476 python3.9[227172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:12 np0005605476 python3.9[227389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.770407117 +0000 UTC m=+0.033930914 container create fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:36:12 np0005605476 systemd[1]: Started libpod-conmon-fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5.scope.
Feb  2 12:36:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.823599277 +0000 UTC m=+0.087123094 container init fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.827854567 +0000 UTC m=+0.091378364 container start fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.830946625 +0000 UTC m=+0.094470422 container attach fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:36:12 np0005605476 interesting_heisenberg[227638]: 167 167
Feb  2 12:36:12 np0005605476 systemd[1]: libpod-fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5.scope: Deactivated successfully.
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.8328699 +0000 UTC m=+0.096393707 container died fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.754980909 +0000 UTC m=+0.018504726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0845655cc81114c9ab831997df8d7b63794f18653d6deac2559bde5488d4879b-merged.mount: Deactivated successfully.
Feb  2 12:36:12 np0005605476 podman[227621]: 2026-02-02 17:36:12.895628111 +0000 UTC m=+0.159151918 container remove fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:36:12 np0005605476 systemd[1]: libpod-conmon-fb381a2decbdbaa7a3a63fb8ab07a4d70248e24158e33c107626e1cba01f6fe5.scope: Deactivated successfully.
Feb  2 12:36:12 np0005605476 python3.9[227620]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.043803077 +0000 UTC m=+0.048498858 container create 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:36:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:13 np0005605476 systemd[1]: Started libpod-conmon-9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e.scope.
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.02065723 +0000 UTC m=+0.025353021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.147397297 +0000 UTC m=+0.152093178 container init 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.153279474 +0000 UTC m=+0.157975285 container start 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.157654478 +0000 UTC m=+0.162350469 container attach 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:36:13 np0005605476 priceless_shirley[227726]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:36:13 np0005605476 priceless_shirley[227726]: --> All data devices are unavailable
Feb  2 12:36:13 np0005605476 python3.9[227837]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:13 np0005605476 systemd[1]: libpod-9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e.scope: Deactivated successfully.
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.578095581 +0000 UTC m=+0.582791352 container died 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:36:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f50157e5af0ed331e7d82ad09b098f31d613e3f3e59741b662a67bde5945f56a-merged.mount: Deactivated successfully.
Feb  2 12:36:13 np0005605476 podman[227674]: 2026-02-02 17:36:13.61854873 +0000 UTC m=+0.623244521 container remove 9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:36:13 np0005605476 systemd[1]: libpod-conmon-9f5cbd39293c4a37084fd9e5cbc7104c89ce3eee884b4cfe1e7c86e96ce3c63e.scope: Deactivated successfully.
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.057244891 +0000 UTC m=+0.051757030 container create c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:36:14 np0005605476 systemd[1]: Started libpod-conmon-c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15.scope.
Feb  2 12:36:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.038110928 +0000 UTC m=+0.032623077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.148613355 +0000 UTC m=+0.143125474 container init c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.1554846 +0000 UTC m=+0.149996699 container start c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.159117903 +0000 UTC m=+0.153630002 container attach c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:36:14 np0005605476 peaceful_edison[228093]: 167 167
Feb  2 12:36:14 np0005605476 systemd[1]: libpod-c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15.scope: Deactivated successfully.
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.161029837 +0000 UTC m=+0.155541976 container died c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:36:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d2e7d8b0e3513891f3ca64823040cf93c89b390880874d4a6854b9925e5f90d0-merged.mount: Deactivated successfully.
Feb  2 12:36:14 np0005605476 podman[228075]: 2026-02-02 17:36:14.208288838 +0000 UTC m=+0.202800977 container remove c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_edison, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:36:14 np0005605476 systemd[1]: libpod-conmon-c6bfd578f9e36aaa9ef4ced5f87d604efee9b25ecd2712e7394abb29a82b6c15.scope: Deactivated successfully.
Feb  2 12:36:14 np0005605476 python3.9[228077]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.342859718 +0000 UTC m=+0.035466948 container create f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:36:14 np0005605476 systemd[1]: Started libpod-conmon-f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e.scope.
Feb  2 12:36:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c6fcef82510e895da2b517dcd66a4244c8e3f0b9fad5dd934e2e437056f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c6fcef82510e895da2b517dcd66a4244c8e3f0b9fad5dd934e2e437056f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c6fcef82510e895da2b517dcd66a4244c8e3f0b9fad5dd934e2e437056f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee0c6fcef82510e895da2b517dcd66a4244c8e3f0b9fad5dd934e2e437056f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.328000486 +0000 UTC m=+0.020607756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.424966408 +0000 UTC m=+0.117573658 container init f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.429867787 +0000 UTC m=+0.122475017 container start f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.432856232 +0000 UTC m=+0.125463462 container attach f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:36:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:14 np0005605476 naughty_jones[228158]: {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    "0": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "devices": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "/dev/loop3"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            ],
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_name": "ceph_lv0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_size": "21470642176",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "name": "ceph_lv0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "tags": {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_name": "ceph",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.crush_device_class": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.encrypted": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.objectstore": "bluestore",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_id": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.vdo": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.with_tpm": "0"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            },
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "vg_name": "ceph_vg0"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        }
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    ],
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    "1": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "devices": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "/dev/loop4"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            ],
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_name": "ceph_lv1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_size": "21470642176",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "name": "ceph_lv1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "tags": {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_name": "ceph",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.crush_device_class": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.encrypted": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.objectstore": "bluestore",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_id": "1",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.vdo": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.with_tpm": "0"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            },
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "vg_name": "ceph_vg1"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        }
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    ],
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    "2": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "devices": [
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "/dev/loop5"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            ],
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_name": "ceph_lv2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_size": "21470642176",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "name": "ceph_lv2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "tags": {
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.cluster_name": "ceph",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.crush_device_class": "",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.encrypted": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.objectstore": "bluestore",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osd_id": "2",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.vdo": "0",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:                "ceph.with_tpm": "0"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            },
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "type": "block",
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:            "vg_name": "ceph_vg2"
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:        }
Feb  2 12:36:14 np0005605476 naughty_jones[228158]:    ]
Feb  2 12:36:14 np0005605476 naughty_jones[228158]: }
Feb  2 12:36:14 np0005605476 systemd[1]: libpod-f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e.scope: Deactivated successfully.
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.762524309 +0000 UTC m=+0.455131539 container died f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:36:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8ee0c6fcef82510e895da2b517dcd66a4244c8e3f0b9fad5dd934e2e437056f8-merged.mount: Deactivated successfully.
Feb  2 12:36:14 np0005605476 podman[228142]: 2026-02-02 17:36:14.795775293 +0000 UTC m=+0.488382533 container remove f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:36:14 np0005605476 systemd[1]: libpod-conmon-f41031755fdb329d466144262a65fe59bbcdd78004cc394d7eb5591b38a17e0e.scope: Deactivated successfully.
Feb  2 12:36:14 np0005605476 python3.9[228295]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.219245632 +0000 UTC m=+0.055695801 container create 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:36:15 np0005605476 systemd[1]: Started libpod-conmon-4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0.scope.
Feb  2 12:36:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.276391124 +0000 UTC m=+0.112841303 container init 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.282817117 +0000 UTC m=+0.119267256 container start 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:36:15 np0005605476 systemd[1]: libpod-4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0.scope: Deactivated successfully.
Feb  2 12:36:15 np0005605476 cool_hellman[228489]: 167 167
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.286337057 +0000 UTC m=+0.122787196 container attach 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:36:15 np0005605476 conmon[228489]: conmon 4008af7bd91fbcfe1186 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0.scope/container/memory.events
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.286989465 +0000 UTC m=+0.123439604 container died 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.198153854 +0000 UTC m=+0.034604033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c2950a257ba703fb2a588595b1162867c2490700d556f95bbd417024d09bfef3-merged.mount: Deactivated successfully.
Feb  2 12:36:15 np0005605476 podman[228439]: 2026-02-02 17:36:15.324305554 +0000 UTC m=+0.160755693 container remove 4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:36:15 np0005605476 systemd[1]: libpod-conmon-4008af7bd91fbcfe1186a6c5b4b1b5845cd86ce29947db74b21269c9057e59c0.scope: Deactivated successfully.
Feb  2 12:36:15 np0005605476 podman[228562]: 2026-02-02 17:36:15.436085587 +0000 UTC m=+0.034681865 container create 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:36:15 np0005605476 systemd[1]: Started libpod-conmon-67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30.scope.
Feb  2 12:36:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:36:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e841a35ce905af83d8092213731b6de6d9097c99c264e14516b909148b8b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e841a35ce905af83d8092213731b6de6d9097c99c264e14516b909148b8b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e841a35ce905af83d8092213731b6de6d9097c99c264e14516b909148b8b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e841a35ce905af83d8092213731b6de6d9097c99c264e14516b909148b8b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:36:15 np0005605476 podman[228562]: 2026-02-02 17:36:15.508364278 +0000 UTC m=+0.106960576 container init 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:36:15 np0005605476 podman[228562]: 2026-02-02 17:36:15.514452901 +0000 UTC m=+0.113049189 container start 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:36:15 np0005605476 podman[228562]: 2026-02-02 17:36:15.421139543 +0000 UTC m=+0.019735901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:36:15 np0005605476 podman[228562]: 2026-02-02 17:36:15.517193749 +0000 UTC m=+0.115790027 container attach 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:36:15 np0005605476 python3.9[228556]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:16 np0005605476 python3.9[228773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:16 np0005605476 lvm[228809]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:36:16 np0005605476 lvm[228807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:36:16 np0005605476 lvm[228809]: VG ceph_vg1 finished
Feb  2 12:36:16 np0005605476 lvm[228807]: VG ceph_vg0 finished
Feb  2 12:36:16 np0005605476 lvm[228814]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:36:16 np0005605476 lvm[228814]: VG ceph_vg2 finished
Feb  2 12:36:16 np0005605476 awesome_shockley[228578]: {}
Feb  2 12:36:16 np0005605476 systemd[1]: libpod-67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30.scope: Deactivated successfully.
Feb  2 12:36:16 np0005605476 systemd[1]: libpod-67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30.scope: Consumed 1.117s CPU time.
Feb  2 12:36:16 np0005605476 podman[228562]: 2026-02-02 17:36:16.272841726 +0000 UTC m=+0.871438024 container died 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:36:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-317e841a35ce905af83d8092213731b6de6d9097c99c264e14516b909148b8b6-merged.mount: Deactivated successfully.
Feb  2 12:36:16 np0005605476 podman[228562]: 2026-02-02 17:36:16.335504984 +0000 UTC m=+0.934101262 container remove 67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shockley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:36:16 np0005605476 systemd[1]: libpod-conmon-67e76cbb13c1381f2a226244026263f33072b788bb1e7a9f23c6ebf1f05aef30.scope: Deactivated successfully.
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:36:16 np0005605476 python3.9[229001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:17 np0005605476 python3.9[229153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:17 np0005605476 python3.9[229305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:18 np0005605476 python3.9[229457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:18 np0005605476 python3.9[229609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:19 np0005605476 podman[229733]: 2026-02-02 17:36:19.503897842 +0000 UTC m=+0.071457260 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:36:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:19 np0005605476 python3.9[229780]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:20 np0005605476 python3.9[229933]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 12:36:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:21 np0005605476 python3.9[230085]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:36:21 np0005605476 systemd[1]: Reloading.
Feb  2 12:36:21 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:36:21 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:36:22 np0005605476 python3.9[230273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:22 np0005605476 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb  2 12:36:22 np0005605476 podman[230399]: 2026-02-02 17:36:22.824213022 +0000 UTC m=+0.118894805 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:36:22 np0005605476 python3.9[230441]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:23 np0005605476 python3.9[230607]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:23 np0005605476 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb  2 12:36:24 np0005605476 python3.9[230761]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:24 np0005605476 python3.9[230914]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:25 np0005605476 python3.9[231067]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:26 np0005605476 python3.9[231220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:26 np0005605476 python3.9[231373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 12:36:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:28 np0005605476 python3.9[231526]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:28 np0005605476 python3.9[231678]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:29 np0005605476 python3.9[231830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:30 np0005605476 python3.9[231982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:30 np0005605476 python3.9[232134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:31 np0005605476 python3.9[232286]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:32 np0005605476 python3.9[232438]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:32 np0005605476 python3.9[232590]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:32 np0005605476 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  2 12:36:32 np0005605476 systemd[1]: virtqemud.service: Deactivated successfully.
Feb  2 12:36:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:33 np0005605476 python3.9[232744]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:33 np0005605476 python3.9[232896]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:36:36
Feb  2 12:36:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:36:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:36:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.rgw.root']
Feb  2 12:36:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:36:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:36:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:39 np0005605476 python3.9[233048]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb  2 12:36:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:40 np0005605476 python3.9[233201]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 12:36:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:41 np0005605476 python3.9[233359]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 12:36:42 np0005605476 systemd-logind[799]: New session 50 of user zuul.
Feb  2 12:36:42 np0005605476 systemd[1]: Started Session 50 of User zuul.
Feb  2 12:36:42 np0005605476 systemd[1]: session-50.scope: Deactivated successfully.
Feb  2 12:36:42 np0005605476 systemd-logind[799]: Session 50 logged out. Waiting for processes to exit.
Feb  2 12:36:42 np0005605476 systemd-logind[799]: Removed session 50.
Feb  2 12:36:42 np0005605476 python3.9[233545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:43 np0005605476 python3.9[233666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053802.3420572-986-247342049554340/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:43 np0005605476 python3.9[233816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:44 np0005605476 python3.9[233892]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:44 np0005605476 python3.9[234042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:45 np0005605476 python3.9[234163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053804.3287675-986-53973793592790/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:45 np0005605476 python3.9[234313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:46 np0005605476 python3.9[234434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053805.4852471-986-104305085601829/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:36:46.624 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:36:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:36:46.625 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:36:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:36:46.625 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:47 np0005605476 python3.9[234584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:36:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:36:47 np0005605476 python3.9[234705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053806.726125-986-239488096796994/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:48 np0005605476 python3.9[234855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:48 np0005605476 python3.9[234976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053807.8111877-986-1692021327281/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:49 np0005605476 python3.9[235128]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:49 np0005605476 podman[235129]: 2026-02-02 17:36:49.630408895 +0000 UTC m=+0.077189872 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:36:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:50 np0005605476 python3.9[235299]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:36:50 np0005605476 python3.9[235451]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:36:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:51 np0005605476 python3.9[235603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:52 np0005605476 python3.9[235726]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1770053811.1043952-1093-247656675178198/.source _original_basename=.au0js1ii follow=False checksum=f031e26c49a57c3a2167991734608f46880dfd5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb  2 12:36:52 np0005605476 python3.9[235878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:36:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:53 np0005605476 podman[236004]: 2026-02-02 17:36:53.376663614 +0000 UTC m=+0.129496746 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller)
Feb  2 12:36:53 np0005605476 python3.9[236043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:53 np0005605476 python3.9[236177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053812.9867547-1119-223584532344449/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:54 np0005605476 python3.9[236327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 12:36:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:36:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:55 np0005605476 python3.9[236448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770053814.1345034-1134-224409695567042/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 12:36:56 np0005605476 python3.9[236600]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb  2 12:36:57 np0005605476 python3.9[236752]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 12:36:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:58 np0005605476 python3[236904]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 12:36:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:36:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:06 np0005605476 podman[236917]: 2026-02-02 17:37:06.798514123 +0000 UTC m=+8.488518138 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 12:37:06 np0005605476 podman[237023]: 2026-02-02 17:37:06.947662186 +0000 UTC m=+0.057986046 container create 0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS)
Feb  2 12:37:06 np0005605476 podman[237023]: 2026-02-02 17:37:06.920603528 +0000 UTC m=+0.030927418 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 12:37:06 np0005605476 python3[236904]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:07 np0005605476 python3.9[237213]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:37:08 np0005605476 python3.9[237367]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb  2 12:37:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:09 np0005605476 python3.9[237519]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 12:37:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:10 np0005605476 python3[237671]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 12:37:10 np0005605476 podman[237706]: 2026-02-02 17:37:10.574820796 +0000 UTC m=+0.057034890 container create 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:10 np0005605476 podman[237706]: 2026-02-02 17:37:10.546927954 +0000 UTC m=+0.029142158 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 12:37:10 np0005605476 python3[237671]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb  2 12:37:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:11 np0005605476 python3.9[237895]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:37:12 np0005605476 python3.9[238049]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:37:12 np0005605476 python3.9[238201]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770053832.2904665-1230-85307982724888/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 12:37:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:13 np0005605476 python3.9[238277]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 12:37:13 np0005605476 systemd[1]: Reloading.
Feb  2 12:37:13 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:37:13 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:37:14 np0005605476 python3.9[238388]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 12:37:14 np0005605476 systemd[1]: Reloading.
Feb  2 12:37:14 np0005605476 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 12:37:14 np0005605476 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 12:37:14 np0005605476 systemd[1]: Starting nova_compute container...
Feb  2 12:37:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:14 np0005605476 podman[238428]: 2026-02-02 17:37:14.816819415 +0000 UTC m=+0.147957216 container init 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=edpm)
Feb  2 12:37:14 np0005605476 podman[238428]: 2026-02-02 17:37:14.824880735 +0000 UTC m=+0.156018466 container start 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:37:14 np0005605476 podman[238428]: nova_compute
Feb  2 12:37:14 np0005605476 nova_compute[238443]: + sudo -E kolla_set_configs
Feb  2 12:37:14 np0005605476 systemd[1]: Started nova_compute container.
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Validating config file
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying service configuration files
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Deleting /etc/ceph
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Creating directory /etc/ceph
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Writing out command to execute
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:14 np0005605476 nova_compute[238443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 12:37:14 np0005605476 nova_compute[238443]: ++ cat /run_command
Feb  2 12:37:14 np0005605476 nova_compute[238443]: + CMD=nova-compute
Feb  2 12:37:14 np0005605476 nova_compute[238443]: + ARGS=
Feb  2 12:37:14 np0005605476 nova_compute[238443]: + sudo kolla_copy_cacerts
Feb  2 12:37:15 np0005605476 nova_compute[238443]: + [[ ! -n '' ]]
Feb  2 12:37:15 np0005605476 nova_compute[238443]: + . kolla_extend_start
Feb  2 12:37:15 np0005605476 nova_compute[238443]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 12:37:15 np0005605476 nova_compute[238443]: Running command: 'nova-compute'
Feb  2 12:37:15 np0005605476 nova_compute[238443]: + umask 0022
Feb  2 12:37:15 np0005605476 nova_compute[238443]: + exec nova-compute
Feb  2 12:37:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb  2 12:37:15 np0005605476 python3.9[238604]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:37:16 np0005605476 python3.9[238755]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:37:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 12:37:17 np0005605476 python3.9[238985]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.270 238447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.271 238447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.272 238447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.272 238447 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.397 238447 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.415 238447 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.416 238447 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.486167666 +0000 UTC m=+0.048645617 container create 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:37:17 np0005605476 systemd[1]: Started libpod-conmon-647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4.scope.
Feb  2 12:37:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.467868195 +0000 UTC m=+0.030346176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.565659161 +0000 UTC m=+0.128137142 container init 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.571761335 +0000 UTC m=+0.134239316 container start 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.575410409 +0000 UTC m=+0.137888360 container attach 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:37:17 np0005605476 vigorous_greider[239143]: 167 167
Feb  2 12:37:17 np0005605476 systemd[1]: libpod-647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4.scope: Deactivated successfully.
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.577519709 +0000 UTC m=+0.139997700 container died 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:37:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-81590af96c3ab8edbfa851d7518acd29fed019f5577f4555cbacaa5a63493f7b-merged.mount: Deactivated successfully.
Feb  2 12:37:17 np0005605476 podman[239095]: 2026-02-02 17:37:17.618612659 +0000 UTC m=+0.181090600 container remove 647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:37:17 np0005605476 systemd[1]: libpod-conmon-647d6d6c95593bc881c97b1b8ace545820736ab5d895566412834705f70378c4.scope: Deactivated successfully.
Feb  2 12:37:17 np0005605476 podman[239170]: 2026-02-02 17:37:17.795361825 +0000 UTC m=+0.047108293 container create 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:37:17 np0005605476 systemd[1]: Started libpod-conmon-4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a.scope.
Feb  2 12:37:17 np0005605476 podman[239170]: 2026-02-02 17:37:17.773911224 +0000 UTC m=+0.025657692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:37:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:37:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:17 np0005605476 podman[239170]: 2026-02-02 17:37:17.902456196 +0000 UTC m=+0.154202644 container init 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:37:17 np0005605476 podman[239170]: 2026-02-02 17:37:17.913243534 +0000 UTC m=+0.164990002 container start 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:37:17 np0005605476 podman[239170]: 2026-02-02 17:37:17.916696182 +0000 UTC m=+0.168442610 container attach 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:37:17 np0005605476 nova_compute[238443]: 2026-02-02 17:37:17.939 238447 INFO nova.virt.driver [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.102 238447 INFO nova.compute.provider_config [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.118 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.118 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.118 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.119 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.120 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.120 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.120 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.120 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.120 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.121 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.121 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.121 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.121 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.121 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.122 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.123 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.123 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.123 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.123 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.123 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.124 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.124 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.124 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.124 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.125 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.125 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.125 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.125 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.125 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.126 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.126 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.126 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.126 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.126 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.127 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.127 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.127 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.127 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.127 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.128 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.128 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.128 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.128 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.128 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.129 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.129 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.129 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.129 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.129 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.130 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.131 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.132 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.132 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.132 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.132 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.132 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.133 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.133 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.133 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.133 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.133 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.134 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.135 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.135 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.135 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.135 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.135 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.136 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.136 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.136 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.136 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.136 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.137 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.138 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.138 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.138 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.138 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.138 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.139 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.140 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.140 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.140 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.140 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.140 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.141 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.142 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.142 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.142 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.142 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.142 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.143 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.144 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.144 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.144 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.144 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.144 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.145 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.145 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.145 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.145 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.145 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.146 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.147 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.147 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.147 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.147 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.147 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.148 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.148 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.148 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.148 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.148 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.149 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.149 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.149 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.149 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.149 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.150 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.151 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.152 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.153 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.154 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.155 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.156 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.157 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.158 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.159 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.160 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.161 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.162 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.163 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.164 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.165 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.166 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.167 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.168 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.169 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.170 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.171 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.172 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.173 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.174 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.175 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.176 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.177 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.178 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.179 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.180 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.181 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.182 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.183 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.184 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.185 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.186 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.187 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.188 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.190 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.190 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.190 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.190 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.191 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.191 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.191 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.191 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.191 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.192 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.192 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.192 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.192 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.192 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.193 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.193 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.193 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.193 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.194 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.195 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.196 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.197 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.198 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 WARNING oslo_config.cfg [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 12:37:18 np0005605476 nova_compute[238443]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 12:37:18 np0005605476 nova_compute[238443]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 12:37:18 np0005605476 nova_compute[238443]: and ``live_migration_inbound_addr`` respectively.
Feb  2 12:37:18 np0005605476 nova_compute[238443]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.199 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.200 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.201 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rbd_secret_uuid        = eb48d0ef-3496-563c-b73d-661fb962013e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.202 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.203 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.204 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.205 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 python3.9[239262]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.206 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.207 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.208 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.209 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.210 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.211 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.212 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.213 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.214 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.215 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.216 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.217 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.218 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.219 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.220 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.221 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.222 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.223 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.224 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.225 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.226 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.227 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.228 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.229 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.230 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.231 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.232 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.233 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.234 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.235 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.236 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.237 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.238 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.239 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.240 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.241 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.242 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.243 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.244 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.245 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.246 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.247 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.248 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.249 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.250 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.251 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.252 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.253 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.254 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.255 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.256 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.257 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.258 238447 DEBUG oslo_service.service [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.259 238447 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 12:37:18 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:37:18 np0005605476 suspicious_shirley[239223]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:37:18 np0005605476 suspicious_shirley[239223]: --> All data devices are unavailable
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.272 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.272 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.272 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.273 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 12:37:18 np0005605476 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 12:37:18 np0005605476 systemd[1]: libpod-4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a.scope: Deactivated successfully.
Feb  2 12:37:18 np0005605476 podman[239170]: 2026-02-02 17:37:18.295035321 +0000 UTC m=+0.546781749 container died 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:37:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-45c8f9c0ede8f76307306907e97314819130d1bdf481d1ea76608318a437675f-merged.mount: Deactivated successfully.
Feb  2 12:37:18 np0005605476 systemd[1]: Started libvirt QEMU daemon.
Feb  2 12:37:18 np0005605476 podman[239170]: 2026-02-02 17:37:18.335533725 +0000 UTC m=+0.587280153 container remove 4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:37:18 np0005605476 systemd[1]: libpod-conmon-4d9020951b1b7952b5be4e5b5e6d6b3e7f31ebcc381eba9ed79f7ac1c0251e1a.scope: Deactivated successfully.
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.343 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f48ca9965b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.345 238447 DEBUG nova.virt.libvirt.host [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f48ca9965b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.346 238447 INFO nova.virt.libvirt.driver [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.359 238447 WARNING nova.virt.libvirt.driver [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 12:37:18 np0005605476 nova_compute[238443]: 2026-02-02 17:37:18.360 238447 DEBUG nova.virt.libvirt.volume.mount [None req-33119f80-a9fb-4c5b-b8dd-db443a2a8bd5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.757637141 +0000 UTC m=+0.047285368 container create 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:37:18 np0005605476 systemd[1]: Started libpod-conmon-4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421.scope.
Feb  2 12:37:18 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.737507368 +0000 UTC m=+0.027155625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.839425661 +0000 UTC m=+0.129073918 container init 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.847636775 +0000 UTC m=+0.137285022 container start 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.851432483 +0000 UTC m=+0.141080740 container attach 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:37:18 np0005605476 upbeat_shaw[239591]: 167 167
Feb  2 12:37:18 np0005605476 systemd[1]: libpod-4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421.scope: Deactivated successfully.
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.854644245 +0000 UTC m=+0.144292502 container died 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:37:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b904008b8e88616c727baee2905735b005d859c374f249b1d0c743b0ed22fa81-merged.mount: Deactivated successfully.
Feb  2 12:37:18 np0005605476 podman[239543]: 2026-02-02 17:37:18.901610043 +0000 UTC m=+0.191258290 container remove 4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:18 np0005605476 systemd[1]: libpod-conmon-4f36b3a14b31d8ae20694655b64f7b3cecea836366381346575698abb67ab421.scope: Deactivated successfully.
Feb  2 12:37:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 12:37:19 np0005605476 podman[239626]: 2026-02-02 17:37:19.085918984 +0000 UTC m=+0.050654094 container create 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:37:19 np0005605476 python3.9[239593]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 12:37:19 np0005605476 systemd[1]: Started libpod-conmon-32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf.scope.
Feb  2 12:37:19 np0005605476 podman[239626]: 2026-02-02 17:37:19.061317593 +0000 UTC m=+0.026052723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:19 np0005605476 systemd[1]: Stopping nova_compute container...
Feb  2 12:37:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f28c77d7497c67ae4561053b62841cf88d79b991511c28b66881dc468d3926/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f28c77d7497c67ae4561053b62841cf88d79b991511c28b66881dc468d3926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f28c77d7497c67ae4561053b62841cf88d79b991511c28b66881dc468d3926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f28c77d7497c67ae4561053b62841cf88d79b991511c28b66881dc468d3926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:19 np0005605476 podman[239626]: 2026-02-02 17:37:19.199952083 +0000 UTC m=+0.164687243 container init 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:37:19 np0005605476 podman[239626]: 2026-02-02 17:37:19.209808914 +0000 UTC m=+0.174544024 container start 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:37:19 np0005605476 podman[239626]: 2026-02-02 17:37:19.216408842 +0000 UTC m=+0.181143972 container attach 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:37:19 np0005605476 nova_compute[238443]: 2026-02-02 17:37:19.219 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:37:19 np0005605476 nova_compute[238443]: 2026-02-02 17:37:19.220 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:37:19 np0005605476 nova_compute[238443]: 2026-02-02 17:37:19.221 238447 DEBUG oslo_concurrency.lockutils [None req-b57e52ca-4cd8-4205-a73a-5a917a96507c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]: {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    "0": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "devices": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "/dev/loop3"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            ],
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_name": "ceph_lv0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_size": "21470642176",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "name": "ceph_lv0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "tags": {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_name": "ceph",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.crush_device_class": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.encrypted": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.objectstore": "bluestore",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_id": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.vdo": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.with_tpm": "0"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            },
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "vg_name": "ceph_vg0"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        }
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    ],
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    "1": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "devices": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "/dev/loop4"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            ],
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_name": "ceph_lv1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_size": "21470642176",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "name": "ceph_lv1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "tags": {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_name": "ceph",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.crush_device_class": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.encrypted": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.objectstore": "bluestore",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_id": "1",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.vdo": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.with_tpm": "0"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            },
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "vg_name": "ceph_vg1"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        }
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    ],
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    "2": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "devices": [
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "/dev/loop5"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            ],
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_name": "ceph_lv2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_size": "21470642176",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "name": "ceph_lv2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "tags": {
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.cluster_name": "ceph",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.crush_device_class": "",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.encrypted": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.objectstore": "bluestore",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osd_id": "2",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.vdo": "0",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:                "ceph.with_tpm": "0"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            },
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "type": "block",
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:            "vg_name": "ceph_vg2"
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:        }
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]:    ]
Feb  2 12:37:19 np0005605476 wonderful_chaplygin[239645]: }
Feb  2 12:37:19 np0005605476 systemd[1]: libpod-32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf.scope: Deactivated successfully.
Feb  2 12:37:19 np0005605476 podman[239667]: 2026-02-02 17:37:19.550873271 +0000 UTC m=+0.040435913 container died 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:37:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-28f28c77d7497c67ae4561053b62841cf88d79b991511c28b66881dc468d3926-merged.mount: Deactivated successfully.
Feb  2 12:37:19 np0005605476 podman[239667]: 2026-02-02 17:37:19.595032779 +0000 UTC m=+0.084595381 container remove 32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:37:19 np0005605476 systemd[1]: libpod-conmon-32f5a0f86f2f332a3f6fc4d8b83c6bd332e6b763d8de884b2742c844cd5d9cbf.scope: Deactivated successfully.
Feb  2 12:37:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:19 np0005605476 virtqemud[239321]: libvirt version: 11.10.0, package: 3.el9 (builder@centos.org, 2026-01-13-15:14:57, )
Feb  2 12:37:19 np0005605476 virtqemud[239321]: hostname: compute-0
Feb  2 12:37:19 np0005605476 conmon[238443]: conmon 17e8fd462ba5e72ae343 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681.scope/container/memory.events
Feb  2 12:37:19 np0005605476 virtqemud[239321]: End of file while reading data: Input/output error
Feb  2 12:37:19 np0005605476 systemd[1]: libpod-17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681.scope: Deactivated successfully.
Feb  2 12:37:19 np0005605476 systemd[1]: libpod-17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681.scope: Consumed 2.906s CPU time.
Feb  2 12:37:19 np0005605476 podman[239650]: 2026-02-02 17:37:19.802792138 +0000 UTC m=+0.618501832 container died 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute)
Feb  2 12:37:19 np0005605476 podman[239706]: 2026-02-02 17:37:19.821866032 +0000 UTC m=+0.069462030 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 12:37:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681-userdata-shm.mount: Deactivated successfully.
Feb  2 12:37:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1-merged.mount: Deactivated successfully.
Feb  2 12:37:20 np0005605476 podman[239650]: 2026-02-02 17:37:20.955294483 +0000 UTC m=+1.771004217 container cleanup 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:37:20 np0005605476 podman[239650]: nova_compute
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.030033483 +0000 UTC m=+0.057314394 container create f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:37:21 np0005605476 podman[239787]: nova_compute
Feb  2 12:37:21 np0005605476 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb  2 12:37:21 np0005605476 systemd[1]: Stopped nova_compute container.
Feb  2 12:37:21 np0005605476 systemd[1]: Started libpod-conmon-f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5.scope.
Feb  2 12:37:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 12:37:21 np0005605476 systemd[1]: Starting nova_compute container...
Feb  2 12:37:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.005484163 +0000 UTC m=+0.032765124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.111571486 +0000 UTC m=+0.138852407 container init f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.122701043 +0000 UTC m=+0.149981944 container start f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.12683628 +0000 UTC m=+0.154117201 container attach f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:37:21 np0005605476 hardcore_mcclintock[239814]: 167 167
Feb  2 12:37:21 np0005605476 systemd[1]: libpod-f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5.scope: Deactivated successfully.
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.128780506 +0000 UTC m=+0.156061397 container died f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:37:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3011e56c43fcd81bc9f8f93fb47a35ae83fba24de1c9caa1313c71923594f095-merged.mount: Deactivated successfully.
Feb  2 12:37:21 np0005605476 podman[239785]: 2026-02-02 17:37:21.181960671 +0000 UTC m=+0.209241552 container remove f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:37:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:21 np0005605476 systemd[1]: libpod-conmon-f54d840bcc94a10063c6155df64e1a87b5471f37d75a7daefa00b26c21cf62f5.scope: Deactivated successfully.
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b201a78fc33ffd5ecf252499217645a7a52199eb93133891737b3f11b2b3c1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 podman[239816]: 2026-02-02 17:37:21.229143995 +0000 UTC m=+0.132969729 container init 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:37:21 np0005605476 podman[239816]: 2026-02-02 17:37:21.236767262 +0000 UTC m=+0.140592976 container start 17e8fd462ba5e72ae3430209748fcdbe5242ab00cbe0d2070f3bc042ab5b9681 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:21 np0005605476 podman[239816]: nova_compute
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + sudo -E kolla_set_configs
Feb  2 12:37:21 np0005605476 systemd[1]: Started nova_compute container.
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Validating config file
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying service configuration files
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /etc/ceph
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Creating directory /etc/ceph
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Writing out command to execute
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:21 np0005605476 nova_compute[239846]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 12:37:21 np0005605476 nova_compute[239846]: ++ cat /run_command
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + CMD=nova-compute
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + ARGS=
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + sudo kolla_copy_cacerts
Feb  2 12:37:21 np0005605476 podman[239862]: 2026-02-02 17:37:21.339740916 +0000 UTC m=+0.038813967 container create 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + [[ ! -n '' ]]
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + . kolla_extend_start
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 12:37:21 np0005605476 nova_compute[239846]: Running command: 'nova-compute'
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + umask 0022
Feb  2 12:37:21 np0005605476 nova_compute[239846]: + exec nova-compute
Feb  2 12:37:21 np0005605476 systemd[1]: Started libpod-conmon-751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138.scope.
Feb  2 12:37:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e567dc6438babad83be7e5873535cd54f824a3e90ac116558021e82be89b303/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e567dc6438babad83be7e5873535cd54f824a3e90ac116558021e82be89b303/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e567dc6438babad83be7e5873535cd54f824a3e90ac116558021e82be89b303/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e567dc6438babad83be7e5873535cd54f824a3e90ac116558021e82be89b303/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:21 np0005605476 podman[239862]: 2026-02-02 17:37:21.403578895 +0000 UTC m=+0.102652036 container init 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:21 np0005605476 podman[239862]: 2026-02-02 17:37:21.408560637 +0000 UTC m=+0.107633708 container start 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:37:21 np0005605476 podman[239862]: 2026-02-02 17:37:21.411920833 +0000 UTC m=+0.110993984 container attach 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:21 np0005605476 podman[239862]: 2026-02-02 17:37:21.325242543 +0000 UTC m=+0.024315614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:37:21 np0005605476 python3.9[240067]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  2 12:37:21 np0005605476 lvm[240113]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:37:21 np0005605476 lvm[240113]: VG ceph_vg0 finished
Feb  2 12:37:21 np0005605476 lvm[240114]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:37:21 np0005605476 lvm[240114]: VG ceph_vg1 finished
Feb  2 12:37:22 np0005605476 lvm[240123]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:37:22 np0005605476 lvm[240123]: VG ceph_vg2 finished
Feb  2 12:37:22 np0005605476 focused_fermi[239907]: {}
Feb  2 12:37:22 np0005605476 podman[239862]: 2026-02-02 17:37:22.118543655 +0000 UTC m=+0.817616716 container died 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:37:22 np0005605476 systemd[1]: Started libpod-conmon-0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d.scope.
Feb  2 12:37:22 np0005605476 systemd[1]: libpod-751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138.scope: Deactivated successfully.
Feb  2 12:37:22 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:37:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7e567dc6438babad83be7e5873535cd54f824a3e90ac116558021e82be89b303-merged.mount: Deactivated successfully.
Feb  2 12:37:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3597b6a14441a920549d7a1f624028463c8146dcc3f496128af5b1509281b90e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3597b6a14441a920549d7a1f624028463c8146dcc3f496128af5b1509281b90e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3597b6a14441a920549d7a1f624028463c8146dcc3f496128af5b1509281b90e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 12:37:22 np0005605476 podman[239862]: 2026-02-02 17:37:22.158948496 +0000 UTC m=+0.858021567 container remove 751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:37:22 np0005605476 systemd[1]: libpod-conmon-751eff13191543db43b9aaa7d3c6fb6be72154f4a93e4301ab73e5a1f5068138.scope: Deactivated successfully.
Feb  2 12:37:22 np0005605476 podman[240141]: 2026-02-02 17:37:22.173356027 +0000 UTC m=+0.118273541 container init 0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:37:22 np0005605476 podman[240141]: 2026-02-02 17:37:22.181494438 +0000 UTC m=+0.126411862 container start 0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 12:37:22 np0005605476 python3.9[240067]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb  2 12:37:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:37:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:37:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Applying nova statedir ownership
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb  2 12:37:22 np0005605476 nova_compute_init[240173]: INFO:nova_statedir:Nova statedir ownership complete
Feb  2 12:37:22 np0005605476 systemd[1]: libpod-0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d.scope: Deactivated successfully.
Feb  2 12:37:22 np0005605476 podman[240200]: 2026-02-02 17:37:22.291922294 +0000 UTC m=+0.024628572 container died 0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d-userdata-shm.mount: Deactivated successfully.
Feb  2 12:37:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3597b6a14441a920549d7a1f624028463c8146dcc3f496128af5b1509281b90e-merged.mount: Deactivated successfully.
Feb  2 12:37:22 np0005605476 podman[240200]: 2026-02-02 17:37:22.327286512 +0000 UTC m=+0.059992770 container cleanup 0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 12:37:22 np0005605476 systemd[1]: libpod-conmon-0582d1d0c8fdcdc0e8c527804f29d74b9807dff1113f9810ea5f48c970c4a71d.scope: Deactivated successfully.
Feb  2 12:37:22 np0005605476 systemd[1]: session-49.scope: Deactivated successfully.
Feb  2 12:37:22 np0005605476 systemd[1]: session-49.scope: Consumed 1min 47.613s CPU time.
Feb  2 12:37:22 np0005605476 systemd-logind[799]: Session 49 logged out. Waiting for processes to exit.
Feb  2 12:37:22 np0005605476 systemd-logind[799]: Removed session 49.
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.081 239853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.081 239853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.081 239853 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.082 239853 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.211 239853 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:37:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.232 239853 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.232 239853 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  2 12:37:23 np0005605476 podman[240267]: 2026-02-02 17:37:23.658995094 +0000 UTC m=+0.097151579 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.715 239853 INFO nova.virt.driver [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.834 239853 INFO nova.compute.provider_config [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.850 239853 DEBUG oslo_concurrency.lockutils [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.850 239853 DEBUG oslo_concurrency.lockutils [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.851 239853 DEBUG oslo_concurrency.lockutils [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.851 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.851 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.851 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.852 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.853 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.853 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.853 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.853 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.853 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.854 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.854 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.854 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.854 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.854 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.855 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.855 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.855 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.855 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.855 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.856 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.856 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.856 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.856 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.856 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.857 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.857 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.857 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.857 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.857 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.858 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.858 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.858 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.858 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.858 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.859 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.859 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.859 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.859 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.860 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.860 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.860 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.860 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.861 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.861 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.861 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.861 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.862 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.862 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.862 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.862 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.862 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.863 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.863 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.863 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.863 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.863 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.864 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.865 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.865 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.865 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.865 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.865 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.866 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.866 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.866 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.866 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.866 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.867 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.867 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.867 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.867 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.867 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.868 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.868 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.868 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.868 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.868 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.869 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.869 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.869 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.869 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.869 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.870 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.870 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.870 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.870 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.870 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.871 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.871 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.871 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.871 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.871 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.872 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.872 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.872 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.872 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.872 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.873 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.873 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.873 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.873 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.874 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.875 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.875 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.875 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.875 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.875 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.876 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.876 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.876 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.876 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.876 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.877 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.878 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.879 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.880 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.881 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.882 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.883 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.884 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.885 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.886 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.887 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.888 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.889 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.890 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.891 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.892 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.893 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.894 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.895 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.896 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.897 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.898 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.899 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.900 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.901 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.902 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.903 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.904 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.905 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.906 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.907 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.908 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.909 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.910 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.911 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.911 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.911 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.911 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.911 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.912 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.913 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.914 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.915 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.916 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.917 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.918 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.919 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.920 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.921 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.922 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.923 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.924 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.925 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.926 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.927 239853 WARNING oslo_config.cfg [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 12:37:23 np0005605476 nova_compute[239846]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 12:37:23 np0005605476 nova_compute[239846]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 12:37:23 np0005605476 nova_compute[239846]: and ``live_migration_inbound_addr`` respectively.
Feb  2 12:37:23 np0005605476 nova_compute[239846]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.928 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.929 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.930 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rbd_secret_uuid        = eb48d0ef-3496-563c-b73d-661fb962013e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.931 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.932 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.933 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.934 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.935 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.936 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.937 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.938 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.939 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.940 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.941 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.942 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.943 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.944 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.945 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.946 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.947 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.948 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.948 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.948 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.948 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.948 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.949 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.950 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.950 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.950 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.950 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.950 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.951 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.952 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.953 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.954 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.954 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.954 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.954 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.955 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.956 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.957 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.958 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.959 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.960 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.961 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.962 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.963 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.964 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.965 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
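Taken together, the `vnc.*` lines above describe the effective console configuration of this compute node: VNC is enabled, the proxy URL points at the cluster's public novncproxy route, QEMU listens on all addresses (`::0`), and the proxy connects back via 192.168.122.100. As a hedged reconstruction (effective values only; defaults and explicit settings are indistinguishable in the log), the equivalent `[vnc]` nova.conf fragment would be:

```ini
[vnc]
enabled = True
auth_schemes = none
novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
server_listen = ::0
server_proxyclient_address = 192.168.122.100
```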
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.966 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.967 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.968 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.969 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.970 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.971 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.972 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.973 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.974 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.975 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.976 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.977 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.978 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.979 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.980 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.981 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.982 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.983 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.984 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.985 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.986 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.987 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.988 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.988 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.988 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.988 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.988 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.989 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.990 239853 DEBUG oslo_service.service [None req-bb19708a-09ce-400f-b9fe-9c40ca32d93a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 12:37:23 np0005605476 nova_compute[239846]: 2026-02-02 17:37:23.991 239853 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.003 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.003 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.004 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.004 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.017 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f83a5e48190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.019 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f83a5e48190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.020 239853 INFO nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.023 239853 INFO nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Libvirt host capabilities <capabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <host>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <uuid>cb1779c6-d1fa-4b89-a494-cd579a1210f6</uuid>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <arch>x86_64</arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model>EPYC-Rome-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <vendor>AMD</vendor>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <microcode version='16777317'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <signature family='23' model='49' stepping='0'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <maxphysaddr mode='emulate' bits='40'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='x2apic'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='tsc-deadline'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='osxsave'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='hypervisor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='tsc_adjust'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='spec-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='stibp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='arch-capabilities'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='cmp_legacy'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='topoext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='virt-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='lbrv'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='tsc-scale'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='vmcb-clean'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='pause-filter'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='pfthreshold'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='svme-addr-chk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='rdctl-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='skip-l1dfl-vmentry'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='mds-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature name='pschange-mc-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <pages unit='KiB' size='4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <pages unit='KiB' size='2048'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <pages unit='KiB' size='1048576'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <power_management>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <suspend_mem/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </power_management>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <iommu support='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <migration_features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <live/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <uri_transports>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <uri_transport>tcp</uri_transport>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <uri_transport>rdma</uri_transport>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </uri_transports>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </migration_features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <topology>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <cells num='1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <cell id='0'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <memory unit='KiB'>7864288</memory>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <pages unit='KiB' size='4'>1966072</pages>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <pages unit='KiB' size='2048'>0</pages>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <pages unit='KiB' size='1048576'>0</pages>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <distances>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <sibling id='0' value='10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          </distances>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          <cpus num='8'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:          </cpus>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        </cell>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </cells>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </topology>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <cache>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </cache>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <secmodel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model>selinux</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <doi>0</doi>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </secmodel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <secmodel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model>dac</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <doi>0</doi>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </secmodel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </host>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <guest>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <os_type>hvm</os_type>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <arch name='i686'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <wordsize>32</wordsize>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <domain type='qemu'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <domain type='kvm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <pae/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <nonpae/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <acpi default='on' toggle='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <apic default='on' toggle='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <cpuselection/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <deviceboot/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <disksnapshot default='on' toggle='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <externalSnapshot/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </guest>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <guest>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <os_type>hvm</os_type>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <arch name='x86_64'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <wordsize>64</wordsize>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <domain type='qemu'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <domain type='kvm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <acpi default='on' toggle='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <apic default='on' toggle='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <cpuselection/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <deviceboot/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <disksnapshot default='on' toggle='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <externalSnapshot/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </guest>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 
Feb  2 12:37:24 np0005605476 nova_compute[239846]: </capabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: #033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.029 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.036 239853 WARNING nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.036 239853 DEBUG nova.virt.libvirt.volume.mount [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.078 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb  2 12:37:24 np0005605476 nova_compute[239846]: <domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <domain>kvm</domain>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <arch>i686</arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <vcpu max='240'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <iothreads supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <os supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='firmware'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <loader supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>rom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pflash</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='readonly'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>yes</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='secure'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </loader>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-passthrough' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='hostPassthroughMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='maximum' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='maximumMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-model' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <vendor>AMD</vendor>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='x2apic'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='hypervisor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='stibp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='overflow-recov'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='succor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lbrv'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-scale'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='flushbyasid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pause-filter'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pfthreshold'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='disable' name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='custom' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Dhyana-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v6'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v7'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <memoryBacking supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='sourceType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>anonymous</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>memfd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </memoryBacking>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <disk supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='diskDevice'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>disk</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cdrom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>floppy</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>lun</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ide</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>fdc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>sata</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <graphics supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vnc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egl-headless</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </graphics>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <video supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='modelType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vga</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cirrus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>none</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>bochs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ramfb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hostdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='mode'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>subsystem</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='startupPolicy'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>mandatory</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>requisite</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>optional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='subsysType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pci</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='capsType'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='pciBackend'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hostdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <rng supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>random</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <filesystem supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='driverType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>path</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>handle</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtiofs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </filesystem>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tpm supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-tis</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-crb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emulator</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>external</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendVersion'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>2.0</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </tpm>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <redirdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </redirdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <channel supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </channel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <crypto supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </crypto>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <interface supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>passt</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <panic supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>isa</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>hyperv</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </panic>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <console supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>null</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dev</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pipe</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stdio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>udp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tcp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu-vdagent</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </console>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <gic supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <vmcoreinfo supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <genid supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backingStoreInput supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backup supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <async-teardown supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <s390-pv supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <ps2 supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tdx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sev supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sgx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hyperv supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='features'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>relaxed</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vapic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>spinlocks</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vpindex</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>runtime</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>synic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stimer</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reset</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vendor_id</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>frequencies</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reenlightenment</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tlbflush</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ipi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>avic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emsr_bitmap</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>xmm_input</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <spinlocks>4095</spinlocks>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <stimer_direct>on</stimer_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hyperv>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <launchSecurity supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: </domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.085 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb  2 12:37:24 np0005605476 nova_compute[239846]: <domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <domain>kvm</domain>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <arch>i686</arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <vcpu max='4096'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <iothreads supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <os supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='firmware'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <loader supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>rom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pflash</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='readonly'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>yes</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='secure'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </loader>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-passthrough' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='hostPassthroughMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='maximum' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='maximumMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-model' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <vendor>AMD</vendor>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='x2apic'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='hypervisor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='stibp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='overflow-recov'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='succor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lbrv'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-scale'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='flushbyasid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pause-filter'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pfthreshold'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='disable' name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='custom' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Dhyana-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v6'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v7'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <memoryBacking supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='sourceType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>anonymous</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>memfd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </memoryBacking>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <disk supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='diskDevice'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>disk</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cdrom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>floppy</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>lun</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>fdc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>sata</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <graphics supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vnc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egl-headless</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </graphics>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <video supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='modelType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vga</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cirrus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>none</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>bochs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ramfb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hostdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='mode'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>subsystem</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='startupPolicy'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>mandatory</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>requisite</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>optional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='subsysType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pci</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='capsType'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='pciBackend'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hostdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <rng supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>random</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <filesystem supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='driverType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>path</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>handle</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtiofs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </filesystem>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tpm supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-tis</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-crb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emulator</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>external</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendVersion'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>2.0</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </tpm>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <redirdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </redirdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <channel supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </channel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <crypto supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </crypto>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <interface supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>passt</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <panic supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>isa</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>hyperv</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </panic>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <console supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>null</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dev</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pipe</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stdio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>udp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tcp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu-vdagent</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </console>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <gic supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <vmcoreinfo supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <genid supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backingStoreInput supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backup supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <async-teardown supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <s390-pv supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <ps2 supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tdx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sev supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sgx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hyperv supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='features'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>relaxed</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vapic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>spinlocks</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vpindex</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>runtime</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>synic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stimer</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reset</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vendor_id</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>frequencies</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reenlightenment</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tlbflush</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ipi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>avic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emsr_bitmap</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>xmm_input</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <spinlocks>4095</spinlocks>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <stimer_direct>on</stimer_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hyperv>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <launchSecurity supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: </domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.156 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.161 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb  2 12:37:24 np0005605476 nova_compute[239846]: <domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <domain>kvm</domain>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <arch>x86_64</arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <vcpu max='240'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <iothreads supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <os supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='firmware'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <loader supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>rom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pflash</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='readonly'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>yes</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='secure'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </loader>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-passthrough' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='hostPassthroughMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='maximum' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='maximumMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-model' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <vendor>AMD</vendor>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='x2apic'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='hypervisor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='stibp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='overflow-recov'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='succor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lbrv'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-scale'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='flushbyasid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pause-filter'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pfthreshold'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='disable' name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='custom' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Dhyana-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v6'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v7'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <memoryBacking supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='sourceType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>anonymous</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>memfd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </memoryBacking>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <disk supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='diskDevice'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>disk</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cdrom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>floppy</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>lun</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ide</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>fdc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>sata</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <graphics supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vnc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egl-headless</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </graphics>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <video supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='modelType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vga</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cirrus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>none</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>bochs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ramfb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hostdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='mode'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>subsystem</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='startupPolicy'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>mandatory</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>requisite</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>optional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='subsysType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pci</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='capsType'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='pciBackend'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hostdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <rng supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>random</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <filesystem supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='driverType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>path</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>handle</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtiofs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </filesystem>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tpm supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-tis</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-crb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emulator</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>external</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendVersion'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>2.0</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </tpm>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <redirdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </redirdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <channel supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </channel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <crypto supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </crypto>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <interface supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>passt</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <panic supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>isa</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>hyperv</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </panic>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <console supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>null</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dev</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pipe</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stdio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>udp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tcp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu-vdagent</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </console>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <gic supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <vmcoreinfo supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <genid supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backingStoreInput supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backup supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <async-teardown supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <s390-pv supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <ps2 supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tdx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sev supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sgx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hyperv supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='features'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>relaxed</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vapic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>spinlocks</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vpindex</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>runtime</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>synic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stimer</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reset</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vendor_id</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>frequencies</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reenlightenment</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tlbflush</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ipi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>avic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emsr_bitmap</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>xmm_input</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <spinlocks>4095</spinlocks>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <stimer_direct>on</stimer_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hyperv>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <launchSecurity supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: </domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.236 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb  2 12:37:24 np0005605476 nova_compute[239846]: <domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <domain>kvm</domain>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <arch>x86_64</arch>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <vcpu max='4096'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <iothreads supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <os supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='firmware'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>efi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <loader supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>rom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pflash</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='readonly'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>yes</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='secure'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>yes</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>no</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </loader>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-passthrough' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='hostPassthroughMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='maximum' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='maximumMigratable'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>on</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>off</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='host-model' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <vendor>AMD</vendor>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='x2apic'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='hypervisor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='stibp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='overflow-recov'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='succor'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lbrv'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='tsc-scale'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='flushbyasid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pause-filter'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='pfthreshold'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <feature policy='disable' name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <mode name='custom' supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Broadwell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='ClearwaterForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ddpd-u'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sha512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm3'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sm4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Cooperlake-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Denverton-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Dhyana-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Milan-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Rome-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-Turin-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amd-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='auto-ibrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vp2intersect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fs-gs-base-ns'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibpb-brtype'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='no-nested-data-bp'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='null-sel-clr-base'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='perfmon-v2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbpb'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='srso-user-kernel-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='stibp-always-on'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='EPYC-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='GraniteRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-128'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-256'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx10-512'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='prefetchiti'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Haswell-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v6'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Icelake-Server-v7'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='IvyBridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='KnightsMill-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4fmaps'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-4vnniw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512er'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512pf'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G4-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Opteron_G5-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fma4'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tbm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xop'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SapphireRapids-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='amx-tile'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-bf16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-fp16'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512-vpopcntdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bitalg'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vbmi2'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrc'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fzrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='la57'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='taa-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='tsx-ldtrk'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='SierraForest-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ifma'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-ne-convert'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx-vnni-int8'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bhi-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='bus-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cmpccxadd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fbsdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='fsrs'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ibrs-all'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='intel-psfd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ipred-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='lam'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mcdt-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pbrsb-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='psdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rrsba-ctrl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='sbdr-ssdp-no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='serialize'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vaes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='vpclmulqdq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Client-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='hle'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='rtm'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Skylake-Server-v5'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512bw'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512cd'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512dq'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512f'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='avx512vl'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='invpcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pcid'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='pku'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='mpx'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v2'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v3'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='core-capability'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='split-lock-detect'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='Snowridge-v4'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='cldemote'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='erms'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='gfni'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdir64b'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='movdiri'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='xsaves'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='athlon-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='core2duo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='coreduo-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='n270-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='ss'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <blockers model='phenom-v1'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnow'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <feature name='3dnowext'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </blockers>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </mode>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <memoryBacking supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <enum name='sourceType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>anonymous</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <value>memfd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </memoryBacking>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <disk supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='diskDevice'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>disk</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cdrom</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>floppy</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>lun</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>fdc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>sata</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <graphics supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vnc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egl-headless</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </graphics>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <video supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='modelType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vga</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>cirrus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>none</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>bochs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ramfb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hostdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='mode'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>subsystem</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='startupPolicy'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>mandatory</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>requisite</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>optional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='subsysType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pci</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>scsi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='capsType'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='pciBackend'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hostdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <rng supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtio-non-transitional</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>random</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>egd</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <filesystem supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='driverType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>path</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>handle</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>virtiofs</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </filesystem>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tpm supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-tis</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tpm-crb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emulator</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>external</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendVersion'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>2.0</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </tpm>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <redirdev supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='bus'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>usb</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </redirdev>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <channel supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </channel>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <crypto supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendModel'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>builtin</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </crypto>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <interface supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='backendType'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>default</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>passt</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <panic supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='model'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>isa</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>hyperv</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </panic>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <console supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='type'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>null</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vc</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pty</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dev</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>file</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>pipe</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stdio</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>udp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tcp</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>unix</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>qemu-vdagent</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>dbus</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </console>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <gic supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <vmcoreinfo supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <genid supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backingStoreInput supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <backup supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <async-teardown supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <s390-pv supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <ps2 supported='yes'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <tdx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sev supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <sgx supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <hyperv supported='yes'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <enum name='features'>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>relaxed</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vapic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>spinlocks</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vpindex</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>runtime</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>synic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>stimer</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reset</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>vendor_id</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>frequencies</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>reenlightenment</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>tlbflush</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>ipi</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>avic</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>emsr_bitmap</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <value>xmm_input</value>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </enum>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      <defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <spinlocks>4095</spinlocks>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <stimer_direct>on</stimer_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:      </defaults>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    </hyperv>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:    <launchSecurity supported='no'/>
Feb  2 12:37:24 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: </domainCapabilities>
Feb  2 12:37:24 np0005605476 nova_compute[239846]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.299 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.300 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.303 239853 INFO nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Secure Boot support detected
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.305 239853 INFO nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.313 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.375 239853 INFO nova.virt.node [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Determined node identity a0b0d175-0948-46db-92ba-608ef43a689f from /var/lib/nova/compute_id
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.390 239853 WARNING nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Compute nodes ['a0b0d175-0948-46db-92ba-608ef43a689f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.420 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.450 239853 WARNING nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.451 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.451 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.452 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.452 239853 DEBUG nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.452 239853 DEBUG oslo_concurrency.processutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:37:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:37:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732164963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:37:24 np0005605476 nova_compute[239846]: 2026-02-02 17:37:24.936 239853 DEBUG oslo_concurrency.processutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:37:24 np0005605476 systemd[1]: Starting libvirt nodedev daemon...
Feb  2 12:37:24 np0005605476 systemd[1]: Started libvirt nodedev daemon.
Feb  2 12:37:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.233 239853 WARNING nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.235 239853 DEBUG nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.235 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.235 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.251 239853 WARNING nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] No compute node record for compute-0.ctlplane.example.com:a0b0d175-0948-46db-92ba-608ef43a689f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a0b0d175-0948-46db-92ba-608ef43a689f could not be found.
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.269 239853 INFO nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a0b0d175-0948-46db-92ba-608ef43a689f
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.322 239853 DEBUG nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 12:37:25 np0005605476 nova_compute[239846]: 2026-02-02 17:37:25.322 239853 DEBUG nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 12:37:26 np0005605476 nova_compute[239846]: 2026-02-02 17:37:26.153 239853 INFO nova.scheduler.client.report [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [req-7f95ff20-c551-435f-a7ba-267c346222d9] Created resource provider record via placement API for resource provider with UUID a0b0d175-0948-46db-92ba-608ef43a689f and name compute-0.ctlplane.example.com.
Feb  2 12:37:26 np0005605476 nova_compute[239846]: 2026-02-02 17:37:26.536 239853 DEBUG oslo_concurrency.processutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:37:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:37:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165237533' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.073 239853 DEBUG oslo_concurrency.processutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.078 239853 DEBUG nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb  2 12:37:27 np0005605476 nova_compute[239846]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.078 239853 INFO nova.virt.libvirt.host [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] kernel doesn't support AMD SEV#033[00m
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.079 239853 DEBUG nova.compute.provider_tree [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.079 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:37:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.128 239853 DEBUG nova.scheduler.client.report [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Updated inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.129 239853 DEBUG nova.compute.provider_tree [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Updating resource provider a0b0d175-0948-46db-92ba-608ef43a689f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.129 239853 DEBUG nova.compute.provider_tree [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.301 239853 DEBUG nova.compute.provider_tree [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Updating resource provider a0b0d175-0948-46db-92ba-608ef43a689f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.328 239853 DEBUG nova.compute.resource_tracker [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.329 239853 DEBUG oslo_concurrency.lockutils [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.329 239853 DEBUG nova.service [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.426 239853 DEBUG nova.service [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb  2 12:37:27 np0005605476 nova_compute[239846]: 2026-02-02 17:37:27.427 239853 DEBUG nova.servicegroup.drivers.db [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb  2 12:37:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:37:36
Feb  2 12:37:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:37:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:37:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Feb  2 12:37:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:37:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:37:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:37:46.624 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:37:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:37:46.625 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:37:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:37:46.625 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107637958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107637958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:37:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3399503935' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3399503935' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273682545' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:37:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273682545' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:37:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.703580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869703634, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1770, "num_deletes": 250, "total_data_size": 2978667, "memory_usage": 3014792, "flush_reason": "Manual Compaction"}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869716209, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1686917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11754, "largest_seqno": 13523, "table_properties": {"data_size": 1681108, "index_size": 2884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14658, "raw_average_key_size": 20, "raw_value_size": 1668266, "raw_average_value_size": 2294, "num_data_blocks": 133, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053672, "oldest_key_time": 1770053672, "file_creation_time": 1770053869, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12690 microseconds, and 5281 cpu microseconds.
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.716269) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1686917 bytes OK
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.716292) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.718299) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.718321) EVENT_LOG_v1 {"time_micros": 1770053869718314, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.718344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2971139, prev total WAL file size 2971139, number of live WAL files 2.
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.719146) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1647KB)], [29(7981KB)]
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869719211, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9860114, "oldest_snapshot_seqno": -1}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4018 keys, 7756825 bytes, temperature: kUnknown
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869759699, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7756825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7728088, "index_size": 17619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95583, "raw_average_key_size": 23, "raw_value_size": 7653757, "raw_average_value_size": 1904, "num_data_blocks": 769, "num_entries": 4018, "num_filter_entries": 4018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770053869, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.759918) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7756825 bytes
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.761481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.1 rd, 191.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4437, records dropped: 419 output_compression: NoCompression
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.761499) EVENT_LOG_v1 {"time_micros": 1770053869761489, "job": 12, "event": "compaction_finished", "compaction_time_micros": 40554, "compaction_time_cpu_micros": 22820, "output_level": 6, "num_output_files": 1, "total_output_size": 7756825, "num_input_records": 4437, "num_output_records": 4018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869761838, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053869763013, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.719011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.763095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.763102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.763105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.763108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:49 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:37:49.763111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:37:50 np0005605476 podman[240385]: 2026-02-02 17:37:50.63036795 +0000 UTC m=+0.069237923 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 12:37:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:54 np0005605476 podman[240404]: 2026-02-02 17:37:54.638408352 +0000 UTC m=+0.084941611 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:37:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:37:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:37:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:12 np0005605476 nova_compute[239846]: 2026-02-02 17:38:12.429 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 12:38:12 np0005605476 nova_compute[239846]: 2026-02-02 17:38:12.476 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 12:38:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:21 np0005605476 podman[240432]: 2026-02-02 17:38:21.615195202 +0000 UTC m=+0.058939970 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:38:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:38:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.244 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.244 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.244 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:38:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:38:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:23 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.384431299 +0000 UTC m=+0.075843132 container create ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.407 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.408 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.409 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.409 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.410 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.410 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.410 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.411 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.412 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:38:23 np0005605476 systemd[1]: Started libpod-conmon-ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae.scope.
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.337051459 +0000 UTC m=+0.028463342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.457 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.457 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.458 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.458 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.459 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.479504108 +0000 UTC m=+0.170915921 container init ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.484410848 +0000 UTC m=+0.175822651 container start ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:38:23 np0005605476 systemd[1]: libpod-ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae.scope: Deactivated successfully.
Feb  2 12:38:23 np0005605476 vigilant_morse[240613]: 167 167
Feb  2 12:38:23 np0005605476 conmon[240613]: conmon ce48a0fca2565a34e11e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae.scope/container/memory.events
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.490016787 +0000 UTC m=+0.181428630 container attach ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.490490751 +0000 UTC m=+0.181902564 container died ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:38:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay-56b5a92ff94373bc03009040fc4471c1f607fe50c9c4b3b349a2fbe77a79f261-merged.mount: Deactivated successfully.
Feb  2 12:38:23 np0005605476 podman[240596]: 2026-02-02 17:38:23.523824311 +0000 UTC m=+0.215236104 container remove ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_morse, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:38:23 np0005605476 systemd[1]: libpod-conmon-ce48a0fca2565a34e11ed6b196717667b94f36e11148f9116ae853597966f2ae.scope: Deactivated successfully.
Feb  2 12:38:23 np0005605476 podman[240657]: 2026-02-02 17:38:23.655520553 +0000 UTC m=+0.044901721 container create 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:38:23 np0005605476 systemd[1]: Started libpod-conmon-40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204.scope.
Feb  2 12:38:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:23 np0005605476 podman[240657]: 2026-02-02 17:38:23.630262753 +0000 UTC m=+0.019643921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:23 np0005605476 podman[240657]: 2026-02-02 17:38:23.734767931 +0000 UTC m=+0.124149089 container init 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:38:23 np0005605476 podman[240657]: 2026-02-02 17:38:23.740503224 +0000 UTC m=+0.129884382 container start 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:38:23 np0005605476 podman[240657]: 2026-02-02 17:38:23.744798386 +0000 UTC m=+0.134179604 container attach 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:38:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:38:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548601879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:38:23 np0005605476 nova_compute[239846]: 2026-02-02 17:38:23.994 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:38:24 np0005605476 silly_swirles[240673]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:38:24 np0005605476 silly_swirles[240673]: --> All data devices are unavailable
Feb  2 12:38:24 np0005605476 systemd[1]: libpod-40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204.scope: Deactivated successfully.
Feb  2 12:38:24 np0005605476 podman[240657]: 2026-02-02 17:38:24.110268749 +0000 UTC m=+0.499649907 container died 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:38:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ef04335517d8a669351c7309dfaaa26cffd6e01829abe3dc7bedb5be5be57c8e-merged.mount: Deactivated successfully.
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.159 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.161 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.161 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.162 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:38:24 np0005605476 podman[240657]: 2026-02-02 17:38:24.169656131 +0000 UTC m=+0.559037289 container remove 40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:38:24 np0005605476 systemd[1]: libpod-conmon-40f8df57623f0f13281297634b48bd1d17d611a5798578418c5f1d3fde53a204.scope: Deactivated successfully.
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.243 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.243 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.260 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.607611117 +0000 UTC m=+0.035213233 container create 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:38:24 np0005605476 systemd[1]: Started libpod-conmon-5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f.scope.
Feb  2 12:38:24 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.681146153 +0000 UTC m=+0.108748319 container init 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.592249451 +0000 UTC m=+0.019851597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.690215231 +0000 UTC m=+0.117817357 container start 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.69508265 +0000 UTC m=+0.122684776 container attach 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:38:24 np0005605476 loving_galois[240805]: 167 167
Feb  2 12:38:24 np0005605476 systemd[1]: libpod-5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f.scope: Deactivated successfully.
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.696372056 +0000 UTC m=+0.123974212 container died 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:38:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-02c1219f46418f3b0a94847e38b3d68f1e0156657d56e02f21770357e75113a2-merged.mount: Deactivated successfully.
Feb  2 12:38:24 np0005605476 podman[240788]: 2026-02-02 17:38:24.733492094 +0000 UTC m=+0.161094230 container remove 5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:38:24 np0005605476 systemd[1]: libpod-conmon-5ecaaee7585e9fac00e2ecce70d103d4b2a6b1f8a093385b7cb6430ddc7cfe6f.scope: Deactivated successfully.
Feb  2 12:38:24 np0005605476 podman[240808]: 2026-02-02 17:38:24.771004003 +0000 UTC m=+0.092562628 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:38:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:38:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171302704' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.796 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:38:24 np0005605476 nova_compute[239846]: 2026-02-02 17:38:24.801 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:38:24 np0005605476 podman[240856]: 2026-02-02 17:38:24.881542642 +0000 UTC m=+0.056683996 container create b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:38:24 np0005605476 systemd[1]: Started libpod-conmon-b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593.scope.
Feb  2 12:38:24 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:24 np0005605476 podman[240856]: 2026-02-02 17:38:24.855778098 +0000 UTC m=+0.030919492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4009de0d95a15c106843357e30cd6464c37dd773ff3a1f452dac8c7fd4bbbb13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4009de0d95a15c106843357e30cd6464c37dd773ff3a1f452dac8c7fd4bbbb13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4009de0d95a15c106843357e30cd6464c37dd773ff3a1f452dac8c7fd4bbbb13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4009de0d95a15c106843357e30cd6464c37dd773ff3a1f452dac8c7fd4bbbb13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:24 np0005605476 podman[240856]: 2026-02-02 17:38:24.977020642 +0000 UTC m=+0.152162016 container init b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:38:24 np0005605476 podman[240856]: 2026-02-02 17:38:24.982509629 +0000 UTC m=+0.157650973 container start b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb  2 12:38:24 np0005605476 podman[240856]: 2026-02-02 17:38:24.989202069 +0000 UTC m=+0.164343443 container attach b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:38:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]: {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    "0": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "devices": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "/dev/loop3"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            ],
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_name": "ceph_lv0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_size": "21470642176",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "name": "ceph_lv0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "tags": {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_name": "ceph",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.crush_device_class": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.encrypted": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.objectstore": "bluestore",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_id": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.vdo": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.with_tpm": "0"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            },
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "vg_name": "ceph_vg0"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        }
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    ],
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    "1": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "devices": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "/dev/loop4"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            ],
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_name": "ceph_lv1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_size": "21470642176",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "name": "ceph_lv1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "tags": {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_name": "ceph",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.crush_device_class": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.encrypted": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.objectstore": "bluestore",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_id": "1",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.vdo": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.with_tpm": "0"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            },
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "vg_name": "ceph_vg1"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        }
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    ],
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    "2": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "devices": [
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "/dev/loop5"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            ],
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_name": "ceph_lv2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_size": "21470642176",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "name": "ceph_lv2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "tags": {
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.cluster_name": "ceph",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.crush_device_class": "",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.encrypted": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.objectstore": "bluestore",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osd_id": "2",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.vdo": "0",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:                "ceph.with_tpm": "0"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            },
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "type": "block",
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:            "vg_name": "ceph_vg2"
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:        }
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]:    ]
Feb  2 12:38:25 np0005605476 dazzling_cerf[240872]: }
Feb  2 12:38:25 np0005605476 systemd[1]: libpod-b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593.scope: Deactivated successfully.
Feb  2 12:38:25 np0005605476 podman[240856]: 2026-02-02 17:38:25.261893908 +0000 UTC m=+0.437035262 container died b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:38:25 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4009de0d95a15c106843357e30cd6464c37dd773ff3a1f452dac8c7fd4bbbb13-merged.mount: Deactivated successfully.
Feb  2 12:38:25 np0005605476 podman[240856]: 2026-02-02 17:38:25.468417292 +0000 UTC m=+0.643558646 container remove b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:38:25 np0005605476 systemd[1]: libpod-conmon-b5567d8a3c952dbf4b90f27f5e3a551b0812c361f4c977786e630974f490c593.scope: Deactivated successfully.
Feb  2 12:38:25 np0005605476 nova_compute[239846]: 2026-02-02 17:38:25.502 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:38:25 np0005605476 nova_compute[239846]: 2026-02-02 17:38:25.507 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:38:25 np0005605476 nova_compute[239846]: 2026-02-02 17:38:25.507 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:38:25 np0005605476 podman[240956]: 2026-02-02 17:38:25.972028981 +0000 UTC m=+0.104539500 container create ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:38:25 np0005605476 podman[240956]: 2026-02-02 17:38:25.899523465 +0000 UTC m=+0.032033974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:26 np0005605476 systemd[1]: Started libpod-conmon-ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99.scope.
Feb  2 12:38:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:26 np0005605476 podman[240956]: 2026-02-02 17:38:26.058739181 +0000 UTC m=+0.191249710 container init ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:38:26 np0005605476 podman[240956]: 2026-02-02 17:38:26.067085659 +0000 UTC m=+0.199596158 container start ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:38:26 np0005605476 podman[240956]: 2026-02-02 17:38:26.070162547 +0000 UTC m=+0.202673046 container attach ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:38:26 np0005605476 thirsty_greider[240973]: 167 167
Feb  2 12:38:26 np0005605476 systemd[1]: libpod-ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99.scope: Deactivated successfully.
Feb  2 12:38:26 np0005605476 podman[240956]: 2026-02-02 17:38:26.071955108 +0000 UTC m=+0.204465607 container died ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:38:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1f1e796f60db31570326dd987ac77f222531a953d7fa11bc3b05b46bf9d95a86-merged.mount: Deactivated successfully.
Feb  2 12:38:26 np0005605476 podman[240956]: 2026-02-02 17:38:26.10466451 +0000 UTC m=+0.237175009 container remove ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:38:26 np0005605476 systemd[1]: libpod-conmon-ff868f1d8ce2e8fac34fc14a064314c108d84be8329672963cf8f3d26c52fc99.scope: Deactivated successfully.
Feb  2 12:38:26 np0005605476 podman[240998]: 2026-02-02 17:38:26.273281134 +0000 UTC m=+0.067631208 container create 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:38:26 np0005605476 systemd[1]: Started libpod-conmon-9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639.scope.
Feb  2 12:38:26 np0005605476 podman[240998]: 2026-02-02 17:38:26.23279345 +0000 UTC m=+0.027143524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:38:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:38:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9dffdbf94ce29af18575e977ef0c4888735b8d46e6884557fb9babdecace8db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9dffdbf94ce29af18575e977ef0c4888735b8d46e6884557fb9babdecace8db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9dffdbf94ce29af18575e977ef0c4888735b8d46e6884557fb9babdecace8db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9dffdbf94ce29af18575e977ef0c4888735b8d46e6884557fb9babdecace8db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:38:26 np0005605476 podman[240998]: 2026-02-02 17:38:26.357209305 +0000 UTC m=+0.151559409 container init 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:38:26 np0005605476 podman[240998]: 2026-02-02 17:38:26.363429102 +0000 UTC m=+0.157779206 container start 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:38:26 np0005605476 podman[240998]: 2026-02-02 17:38:26.367463297 +0000 UTC m=+0.161813421 container attach 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:38:26 np0005605476 lvm[241093]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:38:26 np0005605476 lvm[241093]: VG ceph_vg1 finished
Feb  2 12:38:26 np0005605476 lvm[241092]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:38:26 np0005605476 lvm[241092]: VG ceph_vg0 finished
Feb  2 12:38:26 np0005605476 lvm[241095]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:38:26 np0005605476 lvm[241095]: VG ceph_vg2 finished
Feb  2 12:38:27 np0005605476 nostalgic_haslett[241014]: {}
Feb  2 12:38:27 np0005605476 systemd[1]: libpod-9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639.scope: Deactivated successfully.
Feb  2 12:38:27 np0005605476 systemd[1]: libpod-9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639.scope: Consumed 1.004s CPU time.
Feb  2 12:38:27 np0005605476 podman[240998]: 2026-02-02 17:38:27.078716331 +0000 UTC m=+0.873066435 container died 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:38:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f9dffdbf94ce29af18575e977ef0c4888735b8d46e6884557fb9babdecace8db-merged.mount: Deactivated successfully.
Feb  2 12:38:27 np0005605476 podman[240998]: 2026-02-02 17:38:27.127498601 +0000 UTC m=+0.921848715 container remove 9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:38:27 np0005605476 systemd[1]: libpod-conmon-9ee53ce4df8a1c7466d117e5b8db6a14aea73bafb5eb6ab26bd9aa831a944639.scope: Deactivated successfully.
Feb  2 12:38:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:38:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:38:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:38:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  2 12:38:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2119855642' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  2 12:38:28 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  2 12:38:28 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 12:38:28 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 12:38:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:38:36
Feb  2 12:38:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:38:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:38:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control']
Feb  2 12:38:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:38:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:38:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:38:46.626 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:38:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:38:46.627 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:38:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:38:46.627 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:38:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:38:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  2 12:38:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559344312' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  2 12:38:51 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  2 12:38:51 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 12:38:51 np0005605476 ceph-mgr[75493]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 12:38:52 np0005605476 podman[241134]: 2026-02-02 17:38:52.603073942 +0000 UTC m=+0.057167610 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 12:38:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:38:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:55 np0005605476 podman[241154]: 2026-02-02 17:38:55.621745545 +0000 UTC m=+0.077344475 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb  2 12:38:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:38:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:39:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546239690' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:39:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:39:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546239690' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:39:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:23 np0005605476 podman[241180]: 2026-02-02 17:39:23.607030391 +0000 UTC m=+0.053942763 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:39:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.501 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.501 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.518 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.518 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.519 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.531 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.531 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.532 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.532 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.532 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.532 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.533 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.533 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.533 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.567 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.568 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.568 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.568 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:39:25 np0005605476 nova_compute[239846]: 2026-02-02 17:39:25.568 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:39:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:39:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227803208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.138 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.303 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.304 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.304 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.305 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.367 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.367 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:39:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:26.373 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:39:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:26.374 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:39:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:26.375 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.401 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:39:26 np0005605476 podman[241240]: 2026-02-02 17:39:26.628864413 +0000 UTC m=+0.077867588 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Feb  2 12:39:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:39:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968469294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.945 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.949 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.963 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.966 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:39:26 np0005605476 nova_compute[239846]: 2026-02-02 17:39:26.966 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:39:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:39:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.246693795 +0000 UTC m=+0.036104146 container create e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:39:28 np0005605476 systemd[1]: Started libpod-conmon-e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8.scope.
Feb  2 12:39:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.322178276 +0000 UTC m=+0.111588667 container init e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.229932399 +0000 UTC m=+0.019342780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.327225027 +0000 UTC m=+0.116635388 container start e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:39:28 np0005605476 elastic_shannon[241428]: 167 167
Feb  2 12:39:28 np0005605476 systemd[1]: libpod-e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8.scope: Deactivated successfully.
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.342676387 +0000 UTC m=+0.132086748 container attach e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.343934032 +0000 UTC m=+0.133344403 container died e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:39:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-74da5f30ca6838f1e5ac1885437b414f58d7daa4088a495c6bca10177fc292a9-merged.mount: Deactivated successfully.
Feb  2 12:39:28 np0005605476 podman[241412]: 2026-02-02 17:39:28.536849042 +0000 UTC m=+0.326259383 container remove e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:39:28 np0005605476 systemd[1]: libpod-conmon-e3d70f207a9c3c3eeb06a044b65d52f98a3871f7c9741ffcadc8c94ab9755ce8.scope: Deactivated successfully.
Feb  2 12:39:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:39:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:28 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:39:28 np0005605476 podman[241454]: 2026-02-02 17:39:28.681799747 +0000 UTC m=+0.053391048 container create 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:39:28 np0005605476 systemd[1]: Started libpod-conmon-303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b.scope.
Feb  2 12:39:28 np0005605476 podman[241454]: 2026-02-02 17:39:28.656386069 +0000 UTC m=+0.027977390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:28 np0005605476 podman[241454]: 2026-02-02 17:39:28.802545967 +0000 UTC m=+0.174137288 container init 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:39:28 np0005605476 podman[241454]: 2026-02-02 17:39:28.80947238 +0000 UTC m=+0.181063691 container start 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:39:28 np0005605476 podman[241454]: 2026-02-02 17:39:28.813281696 +0000 UTC m=+0.184873007 container attach 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:39:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:29 np0005605476 suspicious_euclid[241471]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:39:29 np0005605476 suspicious_euclid[241471]: --> All data devices are unavailable
Feb  2 12:39:29 np0005605476 systemd[1]: libpod-303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b.scope: Deactivated successfully.
Feb  2 12:39:29 np0005605476 podman[241454]: 2026-02-02 17:39:29.309347363 +0000 UTC m=+0.680938704 container died 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:39:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-35fa63d00eba0180b0455f02c4f8be77dc100bae0be5b51d362103c7cb2b41dd-merged.mount: Deactivated successfully.
Feb  2 12:39:29 np0005605476 podman[241454]: 2026-02-02 17:39:29.356495136 +0000 UTC m=+0.728086477 container remove 303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_euclid, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:39:29 np0005605476 systemd[1]: libpod-conmon-303645a6953bcaaf40bd75e6e016e80b02c4805348d6416a92b999955586393b.scope: Deactivated successfully.
Feb  2 12:39:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.776564158 +0000 UTC m=+0.049940271 container create 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.745254317 +0000 UTC m=+0.018630430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:29 np0005605476 systemd[1]: Started libpod-conmon-89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a.scope.
Feb  2 12:39:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.908508671 +0000 UTC m=+0.181884814 container init 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.916724999 +0000 UTC m=+0.190101122 container start 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.920146785 +0000 UTC m=+0.193522928 container attach 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:39:29 np0005605476 suspicious_keldysh[241582]: 167 167
Feb  2 12:39:29 np0005605476 systemd[1]: libpod-89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a.scope: Deactivated successfully.
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.921891273 +0000 UTC m=+0.195267396 container died 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:39:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-94fb8b128250e227001b8ca75bea31c0d7e55ebf83e7d614b38dfabc23de4e0e-merged.mount: Deactivated successfully.
Feb  2 12:39:29 np0005605476 podman[241566]: 2026-02-02 17:39:29.956106076 +0000 UTC m=+0.229482179 container remove 89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:39:29 np0005605476 systemd[1]: libpod-conmon-89b91d70c1e44815913ba7336198a747f53feab7424ef2c4f66306f7531c1f1a.scope: Deactivated successfully.
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.124130633 +0000 UTC m=+0.053227663 container create a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:39:30 np0005605476 systemd[1]: Started libpod-conmon-a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90.scope.
Feb  2 12:39:30 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7a0b8ac1403551af67be9cbbc4a82339a4ea1809b86ddee2c27e98bb38b4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7a0b8ac1403551af67be9cbbc4a82339a4ea1809b86ddee2c27e98bb38b4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7a0b8ac1403551af67be9cbbc4a82339a4ea1809b86ddee2c27e98bb38b4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7a0b8ac1403551af67be9cbbc4a82339a4ea1809b86ddee2c27e98bb38b4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.099032434 +0000 UTC m=+0.028129524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.213663435 +0000 UTC m=+0.142760505 container init a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.221008139 +0000 UTC m=+0.150105169 container start a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.225513925 +0000 UTC m=+0.154611025 container attach a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]: {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    "0": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "devices": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "/dev/loop3"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            ],
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_name": "ceph_lv0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_size": "21470642176",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "name": "ceph_lv0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "tags": {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_name": "ceph",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.crush_device_class": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.encrypted": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.objectstore": "bluestore",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_id": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.vdo": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.with_tpm": "0"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            },
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "vg_name": "ceph_vg0"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        }
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    ],
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    "1": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "devices": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "/dev/loop4"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            ],
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_name": "ceph_lv1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_size": "21470642176",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "name": "ceph_lv1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "tags": {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_name": "ceph",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.crush_device_class": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.encrypted": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.objectstore": "bluestore",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_id": "1",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.vdo": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.with_tpm": "0"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            },
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "vg_name": "ceph_vg1"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        }
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    ],
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    "2": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "devices": [
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "/dev/loop5"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            ],
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_name": "ceph_lv2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_size": "21470642176",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "name": "ceph_lv2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "tags": {
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.cluster_name": "ceph",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.crush_device_class": "",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.encrypted": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.objectstore": "bluestore",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osd_id": "2",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.vdo": "0",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:                "ceph.with_tpm": "0"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            },
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "type": "block",
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:            "vg_name": "ceph_vg2"
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:        }
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]:    ]
Feb  2 12:39:30 np0005605476 hungry_ardinghelli[241621]: }
Feb  2 12:39:30 np0005605476 systemd[1]: libpod-a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90.scope: Deactivated successfully.
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.492909618 +0000 UTC m=+0.422006608 container died a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:39:30 np0005605476 systemd[1]: var-lib-containers-storage-overlay-42f7a0b8ac1403551af67be9cbbc4a82339a4ea1809b86ddee2c27e98bb38b4d-merged.mount: Deactivated successfully.
Feb  2 12:39:30 np0005605476 podman[241605]: 2026-02-02 17:39:30.61695629 +0000 UTC m=+0.546053280 container remove a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:39:30 np0005605476 systemd[1]: libpod-conmon-a9c3a39d75730ab3af16bd08aeab0c629f46fcbc0dfa5d0deef7d6a4f118af90.scope: Deactivated successfully.
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.020489483 +0000 UTC m=+0.024007740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.147641892 +0000 UTC m=+0.151160149 container create 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:39:31 np0005605476 systemd[1]: Started libpod-conmon-641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217.scope.
Feb  2 12:39:31 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.272921899 +0000 UTC m=+0.276440166 container init 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.281962661 +0000 UTC m=+0.285480938 container start 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:39:31 np0005605476 romantic_vaughan[241720]: 167 167
Feb  2 12:39:31 np0005605476 systemd[1]: libpod-641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217.scope: Deactivated successfully.
Feb  2 12:39:31 np0005605476 conmon[241720]: conmon 641075da92300cd9fa39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217.scope/container/memory.events
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.330024269 +0000 UTC m=+0.333542556 container attach 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.33188058 +0000 UTC m=+0.335398837 container died 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:39:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4f3455792e09bb6a55f8aeba538deedf7013194ef6c410ba3c508b9a5a847ffa-merged.mount: Deactivated successfully.
Feb  2 12:39:31 np0005605476 podman[241704]: 2026-02-02 17:39:31.588111892 +0000 UTC m=+0.591630149 container remove 641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:39:31 np0005605476 systemd[1]: libpod-conmon-641075da92300cd9fa3904ba8599f4d24b6ed4b421c62c6741df32ed4fab2217.scope: Deactivated successfully.
Feb  2 12:39:31 np0005605476 podman[241744]: 2026-02-02 17:39:31.774396538 +0000 UTC m=+0.049556511 container create b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:39:31 np0005605476 systemd[1]: Started libpod-conmon-b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a.scope.
Feb  2 12:39:31 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:39:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe7e8584f933c6b3f6246627c8c7c3a45597f7b95afce249c91027bd50d6b30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe7e8584f933c6b3f6246627c8c7c3a45597f7b95afce249c91027bd50d6b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe7e8584f933c6b3f6246627c8c7c3a45597f7b95afce249c91027bd50d6b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe7e8584f933c6b3f6246627c8c7c3a45597f7b95afce249c91027bd50d6b30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:39:31 np0005605476 podman[241744]: 2026-02-02 17:39:31.754640808 +0000 UTC m=+0.029800781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:39:31 np0005605476 podman[241744]: 2026-02-02 17:39:31.864124725 +0000 UTC m=+0.139284708 container init b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 12:39:31 np0005605476 podman[241744]: 2026-02-02 17:39:31.869155015 +0000 UTC m=+0.144314978 container start b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:39:31 np0005605476 podman[241744]: 2026-02-02 17:39:31.87326656 +0000 UTC m=+0.148426503 container attach b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:39:32 np0005605476 lvm[241840]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:39:32 np0005605476 lvm[241837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:39:32 np0005605476 lvm[241837]: VG ceph_vg0 finished
Feb  2 12:39:32 np0005605476 lvm[241840]: VG ceph_vg1 finished
Feb  2 12:39:32 np0005605476 lvm[241842]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:39:32 np0005605476 lvm[241842]: VG ceph_vg2 finished
Feb  2 12:39:32 np0005605476 lvm[241843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:39:32 np0005605476 lvm[241843]: VG ceph_vg0 finished
Feb  2 12:39:32 np0005605476 crazy_wilson[241761]: {}
Feb  2 12:39:32 np0005605476 systemd[1]: libpod-b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a.scope: Deactivated successfully.
Feb  2 12:39:32 np0005605476 systemd[1]: libpod-b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a.scope: Consumed 1.157s CPU time.
Feb  2 12:39:32 np0005605476 podman[241744]: 2026-02-02 17:39:32.64753364 +0000 UTC m=+0.922693573 container died b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:39:32 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ffe7e8584f933c6b3f6246627c8c7c3a45597f7b95afce249c91027bd50d6b30-merged.mount: Deactivated successfully.
Feb  2 12:39:32 np0005605476 podman[241744]: 2026-02-02 17:39:32.700051832 +0000 UTC m=+0.975211795 container remove b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:39:32 np0005605476 systemd[1]: libpod-conmon-b41aa43f5d39718f0c2fd61c171dc8127d77f0d7c1e5fdf91e01ed88d61c539a.scope: Deactivated successfully.
Feb  2 12:39:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:39:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:39:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:39:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:39:36
Feb  2 12:39:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:39:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:39:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Feb  2 12:39:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:39:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:39:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.727871) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979727987, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1385, "num_deletes": 506, "total_data_size": 1663930, "memory_usage": 1692704, "flush_reason": "Manual Compaction"}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979739135, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1636599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13524, "largest_seqno": 14908, "table_properties": {"data_size": 1630553, "index_size": 2801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 15300, "raw_average_key_size": 18, "raw_value_size": 1616478, "raw_average_value_size": 1912, "num_data_blocks": 128, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053870, "oldest_key_time": 1770053870, "file_creation_time": 1770053979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11300 microseconds, and 4250 cpu microseconds.
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.739190) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1636599 bytes OK
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.739212) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.740519) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.740538) EVENT_LOG_v1 {"time_micros": 1770053979740533, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.740557) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1656681, prev total WAL file size 1656681, number of live WAL files 2.
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.741320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1598KB)], [32(7575KB)]
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979741435, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9393424, "oldest_snapshot_seqno": -1}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3838 keys, 7375633 bytes, temperature: kUnknown
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979776491, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7375633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7348310, "index_size": 16653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93938, "raw_average_key_size": 24, "raw_value_size": 7277185, "raw_average_value_size": 1896, "num_data_blocks": 706, "num_entries": 3838, "num_filter_entries": 3838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770053979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.776902) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7375633 bytes
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.778018) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.8 rd, 209.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4863, records dropped: 1025 output_compression: NoCompression
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.778043) EVENT_LOG_v1 {"time_micros": 1770053979778030, "job": 14, "event": "compaction_finished", "compaction_time_micros": 35208, "compaction_time_cpu_micros": 15984, "output_level": 6, "num_output_files": 1, "total_output_size": 7375633, "num_input_records": 4863, "num_output_records": 3838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979778322, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770053979779360, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.741098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.779426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.779433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.779436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.779438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:39 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:39:39.779441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:39:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:46.628 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:39:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:46.629 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:39:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:39:46.629 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:39:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:39:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:54 np0005605476 podman[241885]: 2026-02-02 17:39:54.61064539 +0000 UTC m=+0.060356712 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:39:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:39:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:57 np0005605476 podman[241905]: 2026-02-02 17:39:57.63847693 +0000 UTC m=+0.089025339 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:39:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:39:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:40:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1513844421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:40:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:40:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1513844421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:40:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:40:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3367 writes, 15K keys, 3367 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3367 writes, 3367 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1302 writes, 5901 keys, 1302 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s#012Interval WAL: 1302 writes, 1302 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.8      0.16              0.04         7    0.022       0      0       0.0       0.0#012  L6      1/0    7.03 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    207.5    170.4      0.25              0.10         6    0.042     24K   3204       0.0       0.0#012 Sum      1/0    7.03 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    127.4    144.7      0.41              0.14        13    0.031     24K   3204       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    129.0    130.1      0.27              0.10         8    0.034     17K   2472       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    207.5    170.4      0.25              0.10         6    0.042     24K   3204       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.4      0.15              0.04         6    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 308.00 MB usage: 1.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(104,1.69 MB,0.549415%) FilterBlock(14,75.61 KB,0.0239731%) IndexBlock(14,149.30 KB,0.0473369%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 12:40:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:25 np0005605476 podman[241931]: 2026-02-02 17:40:25.643009806 +0000 UTC m=+0.082577789 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.969 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.970 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.970 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.970 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.993 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.994 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.995 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.995 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.995 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.995 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.995 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:40:26 np0005605476 nova_compute[239846]: 2026-02-02 17:40:26.996 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.021 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.021 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.021 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.022 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.022 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:40:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:40:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176247672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.514 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.653 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.654 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.654 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.654 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.730 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.730 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:40:27 np0005605476 nova_compute[239846]: 2026-02-02 17:40:27.752 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:40:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:40:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305518241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.329 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.334 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.362 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.365 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.366 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:40:28 np0005605476 nova_compute[239846]: 2026-02-02 17:40:28.613 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:40:28 np0005605476 podman[241995]: 2026-02-02 17:40:28.688152476 +0000 UTC m=+0.142840677 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb  2 12:40:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:40:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.117168313 +0000 UTC m=+0.058663064 container create d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:40:34 np0005605476 systemd[1]: Started libpod-conmon-d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67.scope.
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.092391353 +0000 UTC m=+0.033886184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.213161435 +0000 UTC m=+0.154656246 container init d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.22161443 +0000 UTC m=+0.163109201 container start d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:40:34 np0005605476 gifted_lichterman[242184]: 167 167
Feb  2 12:40:34 np0005605476 systemd[1]: libpod-d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67.scope: Deactivated successfully.
Feb  2 12:40:34 np0005605476 conmon[242184]: conmon d1f8b50d32789b86cd25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67.scope/container/memory.events
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.230632171 +0000 UTC m=+0.172127022 container attach d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.231853595 +0000 UTC m=+0.173348376 container died d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:40:34 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:40:34 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:34 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:40:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d17f957aebaa94d93c0a0c69463c01333fd4e811f5e90bbd3658d9415608dd0d-merged.mount: Deactivated successfully.
Feb  2 12:40:34 np0005605476 podman[242167]: 2026-02-02 17:40:34.307359957 +0000 UTC m=+0.248854738 container remove d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lichterman, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:40:34 np0005605476 systemd[1]: libpod-conmon-d1f8b50d32789b86cd25439139cf103e5b0838da6ce5ff5ffa9f9d242a905e67.scope: Deactivated successfully.
Feb  2 12:40:34 np0005605476 podman[242210]: 2026-02-02 17:40:34.468917044 +0000 UTC m=+0.045289832 container create 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:40:34 np0005605476 systemd[1]: Started libpod-conmon-46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5.scope.
Feb  2 12:40:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:34 np0005605476 podman[242210]: 2026-02-02 17:40:34.441651485 +0000 UTC m=+0.018024273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:34 np0005605476 podman[242210]: 2026-02-02 17:40:34.565987536 +0000 UTC m=+0.142360384 container init 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:40:34 np0005605476 podman[242210]: 2026-02-02 17:40:34.578050802 +0000 UTC m=+0.154423560 container start 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:40:34 np0005605476 podman[242210]: 2026-02-02 17:40:34.585044366 +0000 UTC m=+0.161417154 container attach 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb  2 12:40:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:35 np0005605476 youthful_faraday[242226]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:40:35 np0005605476 youthful_faraday[242226]: --> All data devices are unavailable
Feb  2 12:40:35 np0005605476 systemd[1]: libpod-46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5.scope: Deactivated successfully.
Feb  2 12:40:35 np0005605476 podman[242210]: 2026-02-02 17:40:35.060141031 +0000 UTC m=+0.636513799 container died 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:40:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e2fd7a1bf08e6894503733134d4a8a63b0e73f1be1363ee60520e15e9c124a46-merged.mount: Deactivated successfully.
Feb  2 12:40:35 np0005605476 podman[242210]: 2026-02-02 17:40:35.125337795 +0000 UTC m=+0.701710573 container remove 46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:40:35 np0005605476 systemd[1]: libpod-conmon-46f52ad5649653e5cf1b39fbe9dca6ba5fda8714e3abb2bce0cae184de0919e5.scope: Deactivated successfully.
Feb  2 12:40:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.586502702 +0000 UTC m=+0.059191139 container create 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:40:35 np0005605476 systemd[1]: Started libpod-conmon-1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654.scope.
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.546624682 +0000 UTC m=+0.019313119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.664936245 +0000 UTC m=+0.137624652 container init 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.67086858 +0000 UTC m=+0.143556977 container start 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:40:35 np0005605476 nostalgic_kare[242335]: 167 167
Feb  2 12:40:35 np0005605476 systemd[1]: libpod-1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654.scope: Deactivated successfully.
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.676451026 +0000 UTC m=+0.149139453 container attach 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.677303599 +0000 UTC m=+0.149992016 container died 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb  2 12:40:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-af72ef856cee66420c7c92c226598f35526710958ffd81bef762e4e51eaa93db-merged.mount: Deactivated successfully.
Feb  2 12:40:35 np0005605476 podman[242319]: 2026-02-02 17:40:35.721657094 +0000 UTC m=+0.194345491 container remove 1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:40:35 np0005605476 systemd[1]: libpod-conmon-1b8190887723153d5c6a8d1033b6eab58e6d8ef7addcfea81975b555d93f3654.scope: Deactivated successfully.
Feb  2 12:40:35 np0005605476 podman[242358]: 2026-02-02 17:40:35.882983615 +0000 UTC m=+0.057021229 container create 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:40:35 np0005605476 systemd[1]: Started libpod-conmon-8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e.scope.
Feb  2 12:40:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e3c6f1630ec2dd9daf09c49f6431fe0f566cdccdc09713e2377bfe4f578f17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e3c6f1630ec2dd9daf09c49f6431fe0f566cdccdc09713e2377bfe4f578f17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e3c6f1630ec2dd9daf09c49f6431fe0f566cdccdc09713e2377bfe4f578f17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e3c6f1630ec2dd9daf09c49f6431fe0f566cdccdc09713e2377bfe4f578f17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:35 np0005605476 podman[242358]: 2026-02-02 17:40:35.856500947 +0000 UTC m=+0.030538641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:35 np0005605476 podman[242358]: 2026-02-02 17:40:35.980756746 +0000 UTC m=+0.154794380 container init 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 12:40:35 np0005605476 podman[242358]: 2026-02-02 17:40:35.989461268 +0000 UTC m=+0.163498882 container start 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:40:35 np0005605476 podman[242358]: 2026-02-02 17:40:35.996432992 +0000 UTC m=+0.170470656 container attach 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]: {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    "0": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "devices": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "/dev/loop3"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            ],
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_name": "ceph_lv0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_size": "21470642176",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "name": "ceph_lv0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "tags": {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_name": "ceph",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.crush_device_class": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.encrypted": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.objectstore": "bluestore",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_id": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.vdo": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.with_tpm": "0"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            },
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "vg_name": "ceph_vg0"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        }
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    ],
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    "1": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "devices": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "/dev/loop4"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            ],
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_name": "ceph_lv1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_size": "21470642176",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "name": "ceph_lv1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "tags": {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_name": "ceph",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.crush_device_class": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.encrypted": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.objectstore": "bluestore",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_id": "1",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.vdo": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.with_tpm": "0"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            },
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "vg_name": "ceph_vg1"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        }
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    ],
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    "2": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "devices": [
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "/dev/loop5"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            ],
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_name": "ceph_lv2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_size": "21470642176",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "name": "ceph_lv2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "tags": {
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.cluster_name": "ceph",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.crush_device_class": "",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.encrypted": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.objectstore": "bluestore",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osd_id": "2",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.vdo": "0",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:                "ceph.with_tpm": "0"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            },
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "type": "block",
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:            "vg_name": "ceph_vg2"
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:        }
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]:    ]
Feb  2 12:40:36 np0005605476 hungry_blackburn[242374]: }
Feb  2 12:40:36 np0005605476 systemd[1]: libpod-8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e.scope: Deactivated successfully.
Feb  2 12:40:36 np0005605476 podman[242358]: 2026-02-02 17:40:36.281309852 +0000 UTC m=+0.455347466 container died 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:40:36 np0005605476 systemd[1]: var-lib-containers-storage-overlay-85e3c6f1630ec2dd9daf09c49f6431fe0f566cdccdc09713e2377bfe4f578f17-merged.mount: Deactivated successfully.
Feb  2 12:40:36 np0005605476 podman[242358]: 2026-02-02 17:40:36.326284654 +0000 UTC m=+0.500322238 container remove 8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:40:36 np0005605476 systemd[1]: libpod-conmon-8d43e6fc9df21c4208aa3bcaeafb1ef8b4b69c2ddcce5e2cfda8bd233731a79e.scope: Deactivated successfully.
Feb  2 12:40:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:40:36
Feb  2 12:40:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:40:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:40:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images']
Feb  2 12:40:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:40:36 np0005605476 podman[242457]: 2026-02-02 17:40:36.860450382 +0000 UTC m=+0.109947201 container create f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:40:36 np0005605476 podman[242457]: 2026-02-02 17:40:36.780574139 +0000 UTC m=+0.030070998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:36 np0005605476 systemd[1]: Started libpod-conmon-f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde.scope.
Feb  2 12:40:36 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:37 np0005605476 podman[242457]: 2026-02-02 17:40:37.020365084 +0000 UTC m=+0.269861893 container init f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 12:40:37 np0005605476 podman[242457]: 2026-02-02 17:40:37.031769651 +0000 UTC m=+0.281266470 container start f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:40:37 np0005605476 suspicious_williamson[242473]: 167 167
Feb  2 12:40:37 np0005605476 systemd[1]: libpod-f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde.scope: Deactivated successfully.
Feb  2 12:40:37 np0005605476 podman[242457]: 2026-02-02 17:40:37.063653748 +0000 UTC m=+0.313150567 container attach f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:40:37 np0005605476 podman[242457]: 2026-02-02 17:40:37.065809988 +0000 UTC m=+0.315306787 container died f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:37 np0005605476 systemd[1]: var-lib-containers-storage-overlay-68279a1df43ca457893aea5dbf1e182c313d82417377a7013ead1d4657f4ff81-merged.mount: Deactivated successfully.
Feb  2 12:40:37 np0005605476 podman[242457]: 2026-02-02 17:40:37.372463273 +0000 UTC m=+0.621960062 container remove f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:40:37 np0005605476 systemd[1]: libpod-conmon-f7654c8f6918eed66ca778531dd3ee98c4ca2271cc777c6848387473fe046fde.scope: Deactivated successfully.
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:40:37 np0005605476 podman[242499]: 2026-02-02 17:40:37.532504808 +0000 UTC m=+0.055517516 container create 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:40:37 np0005605476 podman[242499]: 2026-02-02 17:40:37.495767986 +0000 UTC m=+0.018780724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:40:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:40:37 np0005605476 systemd[1]: Started libpod-conmon-9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99.scope.
Feb  2 12:40:37 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:40:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56a6aeef6ab0ac7f705bf3f7d14b0e809232303e0f1b04c90f30596f4ed541d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56a6aeef6ab0ac7f705bf3f7d14b0e809232303e0f1b04c90f30596f4ed541d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56a6aeef6ab0ac7f705bf3f7d14b0e809232303e0f1b04c90f30596f4ed541d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:37 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56a6aeef6ab0ac7f705bf3f7d14b0e809232303e0f1b04c90f30596f4ed541d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:40:37 np0005605476 podman[242499]: 2026-02-02 17:40:37.718554567 +0000 UTC m=+0.241567325 container init 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:40:37 np0005605476 podman[242499]: 2026-02-02 17:40:37.729232874 +0000 UTC m=+0.252245612 container start 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:40:37 np0005605476 podman[242499]: 2026-02-02 17:40:37.75243967 +0000 UTC m=+0.275452428 container attach 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:40:38 np0005605476 lvm[242595]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:40:38 np0005605476 lvm[242595]: VG ceph_vg1 finished
Feb  2 12:40:38 np0005605476 lvm[242594]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:40:38 np0005605476 lvm[242594]: VG ceph_vg0 finished
Feb  2 12:40:38 np0005605476 lvm[242597]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:40:38 np0005605476 lvm[242597]: VG ceph_vg2 finished
Feb  2 12:40:38 np0005605476 happy_mcnulty[242516]: {}
Feb  2 12:40:38 np0005605476 systemd[1]: libpod-9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99.scope: Deactivated successfully.
Feb  2 12:40:38 np0005605476 systemd[1]: libpod-9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99.scope: Consumed 1.112s CPU time.
Feb  2 12:40:38 np0005605476 podman[242499]: 2026-02-02 17:40:38.473248484 +0000 UTC m=+0.996261242 container died 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:40:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d56a6aeef6ab0ac7f705bf3f7d14b0e809232303e0f1b04c90f30596f4ed541d-merged.mount: Deactivated successfully.
Feb  2 12:40:38 np0005605476 podman[242499]: 2026-02-02 17:40:38.554533436 +0000 UTC m=+1.077546144 container remove 9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 12:40:38 np0005605476 systemd[1]: libpod-conmon-9beecb0343b28feeffb8babcdb5ebd813f49896b9d05396d42831b64d088da99.scope: Deactivated successfully.
Feb  2 12:40:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:40:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:40:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:40:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:40:46.629 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:40:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:40:46.631 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:40:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:40:46.631 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:40:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:40:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:40:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:56 np0005605476 podman[242637]: 2026-02-02 17:40:56.608918456 +0000 UTC m=+0.050983400 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:40:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:40:59 np0005605476 podman[242656]: 2026-02-02 17:40:59.629938787 +0000 UTC m=+0.078930928 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 12:40:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:41:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2788446908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:41:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:41:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2788446908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:41:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:41:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5881 writes, 24K keys, 5881 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5881 writes, 1028 syncs, 5.72 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5572fba838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  2 12:41:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:41:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7195 writes, 29K keys, 7195 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7195 writes, 1468 syncs, 4.90 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x555b258e78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  2 12:41:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:23 np0005605476 nova_compute[239846]: 2026-02-02 17:41:23.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:23 np0005605476 nova_compute[239846]: 2026-02-02 17:41:23.259 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:41:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5660 writes, 24K keys, 5660 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5660 writes, 917 syncs, 6.17 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.291 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.291 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.291 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.291 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.292 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:41:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:41:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843940836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.811 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.951 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.952 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.952 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:41:24 np0005605476 nova_compute[239846]: 2026-02-02 17:41:24.952 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.019 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.020 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.034 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:41:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:25 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 12:41:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:41:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/560494454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.556 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.561 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.579 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.581 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:41:25 np0005605476 nova_compute[239846]: 2026-02-02 17:41:25.582 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.582 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.582 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.582 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.603 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.604 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:26 np0005605476 nova_compute[239846]: 2026-02-02 17:41:26.604 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:27 np0005605476 nova_compute[239846]: 2026-02-02 17:41:27.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:27 np0005605476 nova_compute[239846]: 2026-02-02 17:41:27.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:27 np0005605476 nova_compute[239846]: 2026-02-02 17:41:27.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:27 np0005605476 nova_compute[239846]: 2026-02-02 17:41:27.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:41:27 np0005605476 nova_compute[239846]: 2026-02-02 17:41:27.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:41:27 np0005605476 podman[242726]: 2026-02-02 17:41:27.612040766 +0000 UTC m=+0.054491578 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 12:41:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:30 np0005605476 podman[242747]: 2026-02-02 17:41:30.617865393 +0000 UTC m=+0.070153004 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 12:41:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:41:36
Feb  2 12:41:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:41:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:41:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data']
Feb  2 12:41:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:41:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:41:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.736529322 +0000 UTC m=+0.065512415 container create 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:41:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:39 np0005605476 systemd[1]: Started libpod-conmon-2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a.scope.
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.695514768 +0000 UTC m=+0.024497941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.833194582 +0000 UTC m=+0.162177685 container init 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.841126215 +0000 UTC m=+0.170109308 container start 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.84558208 +0000 UTC m=+0.174565193 container attach 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:41:39 np0005605476 dreamy_babbage[242935]: 167 167
Feb  2 12:41:39 np0005605476 systemd[1]: libpod-2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a.scope: Deactivated successfully.
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.847942686 +0000 UTC m=+0.176925769 container died 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:41:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-06f89b51a380e153f030b6fa7413d781f8a766937583f0b7eed00da2b901d0df-merged.mount: Deactivated successfully.
Feb  2 12:41:39 np0005605476 podman[242919]: 2026-02-02 17:41:39.884976499 +0000 UTC m=+0.213959612 container remove 2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:41:39 np0005605476 systemd[1]: libpod-conmon-2617b8d5a5bd4ddf6764aecb801db287a82ef15f12828318a4a0946d983fe70a.scope: Deactivated successfully.
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.025182764 +0000 UTC m=+0.051247583 container create 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:41:40 np0005605476 systemd[1]: Started libpod-conmon-0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365.scope.
Feb  2 12:41:40 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:40 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.006014504 +0000 UTC m=+0.032079303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.122568784 +0000 UTC m=+0.148633593 container init 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.138310897 +0000 UTC m=+0.164375706 container start 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.142261838 +0000 UTC m=+0.168326657 container attach 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:41:40 np0005605476 zealous_thompson[242975]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:41:40 np0005605476 zealous_thompson[242975]: --> All data devices are unavailable
Feb  2 12:41:40 np0005605476 systemd[1]: libpod-0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365.scope: Deactivated successfully.
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.558235762 +0000 UTC m=+0.584300571 container died 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:41:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-38cdaa730b90392efd9ae210d2f2660cf473feb553fa94b9599eb9fc17a170de-merged.mount: Deactivated successfully.
Feb  2 12:41:40 np0005605476 podman[242959]: 2026-02-02 17:41:40.608575219 +0000 UTC m=+0.634640038 container remove 0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:41:40 np0005605476 systemd[1]: libpod-conmon-0441a3d7d69752d3e679c7b05118a2819959dffd9467536d53a755f22e917365.scope: Deactivated successfully.
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.045134842 +0000 UTC m=+0.053894047 container create 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:41:41 np0005605476 systemd[1]: Started libpod-conmon-68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013.scope.
Feb  2 12:41:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.025374266 +0000 UTC m=+0.034133561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.123104446 +0000 UTC m=+0.131863701 container init 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.128603601 +0000 UTC m=+0.137362806 container start 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:41:41 np0005605476 funny_rubin[243089]: 167 167
Feb  2 12:41:41 np0005605476 systemd[1]: libpod-68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013.scope: Deactivated successfully.
Feb  2 12:41:41 np0005605476 conmon[243089]: conmon 68b368dd2c8988af1db6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013.scope/container/memory.events
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.133176449 +0000 UTC m=+0.141935684 container attach 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.134279981 +0000 UTC m=+0.143039186 container died 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:41:41 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1ba5a0223e55c3423099b6da512d1cf9215cdebee82afdf5d0b2ba758e37669d-merged.mount: Deactivated successfully.
Feb  2 12:41:41 np0005605476 podman[243073]: 2026-02-02 17:41:41.169969895 +0000 UTC m=+0.178729100 container remove 68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:41:41 np0005605476 systemd[1]: libpod-conmon-68b368dd2c8988af1db6fca96ddfea01c6fddecb8a7f6e40f709b6ec602f8013.scope: Deactivated successfully.
Feb  2 12:41:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.300140677 +0000 UTC m=+0.047781575 container create 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:41:41 np0005605476 systemd[1]: Started libpod-conmon-3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51.scope.
Feb  2 12:41:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb0c86e5dd295494296a8e504b4b4c5f4b15d5d5bc5e506a6a3db2e4930a042/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.277317785 +0000 UTC m=+0.024958763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb0c86e5dd295494296a8e504b4b4c5f4b15d5d5bc5e506a6a3db2e4930a042/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb0c86e5dd295494296a8e504b4b4c5f4b15d5d5bc5e506a6a3db2e4930a042/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb0c86e5dd295494296a8e504b4b4c5f4b15d5d5bc5e506a6a3db2e4930a042/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.397694162 +0000 UTC m=+0.145335120 container init 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.403501326 +0000 UTC m=+0.151142234 container start 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.413849207 +0000 UTC m=+0.161490115 container attach 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]: {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    "0": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "devices": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "/dev/loop3"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            ],
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_name": "ceph_lv0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_size": "21470642176",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "name": "ceph_lv0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "tags": {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_name": "ceph",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.crush_device_class": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.encrypted": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.objectstore": "bluestore",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_id": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.vdo": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.with_tpm": "0"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            },
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "vg_name": "ceph_vg0"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        }
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    ],
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    "1": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "devices": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "/dev/loop4"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            ],
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_name": "ceph_lv1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_size": "21470642176",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "name": "ceph_lv1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "tags": {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_name": "ceph",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.crush_device_class": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.encrypted": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.objectstore": "bluestore",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_id": "1",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.vdo": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.with_tpm": "0"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            },
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "vg_name": "ceph_vg1"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        }
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    ],
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    "2": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "devices": [
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "/dev/loop5"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            ],
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_name": "ceph_lv2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_size": "21470642176",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "name": "ceph_lv2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "tags": {
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.cluster_name": "ceph",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.crush_device_class": "",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.encrypted": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.objectstore": "bluestore",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osd_id": "2",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.vdo": "0",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:                "ceph.with_tpm": "0"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            },
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "type": "block",
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:            "vg_name": "ceph_vg2"
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:        }
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]:    ]
Feb  2 12:41:41 np0005605476 sharp_ardinghelli[243129]: }
Feb  2 12:41:41 np0005605476 systemd[1]: libpod-3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51.scope: Deactivated successfully.
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.691772856 +0000 UTC m=+0.439413764 container died 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:41:41 np0005605476 systemd[1]: var-lib-containers-storage-overlay-acb0c86e5dd295494296a8e504b4b4c5f4b15d5d5bc5e506a6a3db2e4930a042-merged.mount: Deactivated successfully.
Feb  2 12:41:41 np0005605476 podman[243113]: 2026-02-02 17:41:41.737146342 +0000 UTC m=+0.484787220 container remove 3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:41:41 np0005605476 systemd[1]: libpod-conmon-3774761a80ee731435a8eecccc4807d194ee583e177fffae27f97d48f49d8c51.scope: Deactivated successfully.
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.157556692 +0000 UTC m=+0.033857064 container create 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:41:42 np0005605476 systemd[1]: Started libpod-conmon-2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c.scope.
Feb  2 12:41:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.217227221 +0000 UTC m=+0.093527613 container init 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.223761664 +0000 UTC m=+0.100062056 container start 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:41:42 np0005605476 unruffled_shockley[243230]: 167 167
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.227127569 +0000 UTC m=+0.103427951 container attach 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:41:42 np0005605476 systemd[1]: libpod-2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c.scope: Deactivated successfully.
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.227982013 +0000 UTC m=+0.104282385 container died 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.140611445 +0000 UTC m=+0.016911817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:42 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e0a4e2285cb37b2fcb5857484e72d1fb076e1ef1278611bf8f82602d2a519363-merged.mount: Deactivated successfully.
Feb  2 12:41:42 np0005605476 podman[243213]: 2026-02-02 17:41:42.260249101 +0000 UTC m=+0.136549453 container remove 2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:41:42 np0005605476 systemd[1]: libpod-conmon-2d56ebf6d01d5fe8ba18a57a93f9e2b1606add0ebc74619b96d3bdd3455f566c.scope: Deactivated successfully.
Feb  2 12:41:42 np0005605476 podman[243255]: 2026-02-02 17:41:42.407935467 +0000 UTC m=+0.048727552 container create 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:41:42 np0005605476 systemd[1]: Started libpod-conmon-58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5.scope.
Feb  2 12:41:42 np0005605476 podman[243255]: 2026-02-02 17:41:42.386048141 +0000 UTC m=+0.026840316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:41:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:41:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe3631c7ad1295c347e3199f39226a597bb3e494f0dd979f9ed7b366802188/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe3631c7ad1295c347e3199f39226a597bb3e494f0dd979f9ed7b366802188/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe3631c7ad1295c347e3199f39226a597bb3e494f0dd979f9ed7b366802188/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe3631c7ad1295c347e3199f39226a597bb3e494f0dd979f9ed7b366802188/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:41:42 np0005605476 podman[243255]: 2026-02-02 17:41:42.524794955 +0000 UTC m=+0.165587100 container init 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:41:42 np0005605476 podman[243255]: 2026-02-02 17:41:42.533545371 +0000 UTC m=+0.174337476 container start 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:41:42 np0005605476 podman[243255]: 2026-02-02 17:41:42.555602812 +0000 UTC m=+0.196394917 container attach 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:41:43 np0005605476 lvm[243348]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:41:43 np0005605476 lvm[243348]: VG ceph_vg0 finished
Feb  2 12:41:43 np0005605476 lvm[243350]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:41:43 np0005605476 lvm[243350]: VG ceph_vg1 finished
Feb  2 12:41:43 np0005605476 lvm[243352]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:41:43 np0005605476 lvm[243352]: VG ceph_vg2 finished
Feb  2 12:41:43 np0005605476 lvm[243354]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:41:43 np0005605476 lvm[243354]: VG ceph_vg2 finished
Feb  2 12:41:43 np0005605476 pedantic_shockley[243271]: {}
Feb  2 12:41:43 np0005605476 lvm[243356]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:41:43 np0005605476 lvm[243356]: VG ceph_vg2 finished
Feb  2 12:41:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:43 np0005605476 systemd[1]: libpod-58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5.scope: Deactivated successfully.
Feb  2 12:41:43 np0005605476 podman[243255]: 2026-02-02 17:41:43.230923793 +0000 UTC m=+0.871715948 container died 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:41:43 np0005605476 systemd[1]: var-lib-containers-storage-overlay-89fe3631c7ad1295c347e3199f39226a597bb3e494f0dd979f9ed7b366802188-merged.mount: Deactivated successfully.
Feb  2 12:41:43 np0005605476 podman[243255]: 2026-02-02 17:41:43.278023998 +0000 UTC m=+0.918816093 container remove 58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:41:43 np0005605476 systemd[1]: libpod-conmon-58aa9bb009a33904469f9b30de417569feee187653789f5843a8ae8114b871a5.scope: Deactivated successfully.
Feb  2 12:41:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:41:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:41:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:41:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:41:46.631 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:41:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:41:46.632 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:41:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:41:46.633 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.623037931563108e-06 of space, bias 4.0, pg target 0.0019476455178757295 quantized to 16 (current 16)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:41:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:41:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:41:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:58 np0005605476 podman[243393]: 2026-02-02 17:41:58.612020872 +0000 UTC m=+0.055893753 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 12:41:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:41:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:01 np0005605476 podman[243414]: 2026-02-02 17:42:01.696233562 +0000 UTC m=+0.143680613 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:42:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:42:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2841208710' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:42:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:42:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2841208710' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:42:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.733205) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054131733275, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1435, "num_deletes": 251, "total_data_size": 2295759, "memory_usage": 2344224, "flush_reason": "Manual Compaction"}
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054131905170, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2263158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14909, "largest_seqno": 16343, "table_properties": {"data_size": 2256440, "index_size": 3853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13704, "raw_average_key_size": 19, "raw_value_size": 2243060, "raw_average_value_size": 3213, "num_data_blocks": 176, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770053979, "oldest_key_time": 1770053979, "file_creation_time": 1770054131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 171993 microseconds, and 5764 cpu microseconds.
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.905210) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2263158 bytes OK
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.905227) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.958220) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.958265) EVENT_LOG_v1 {"time_micros": 1770054131958256, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.958289) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2289447, prev total WAL file size 2289447, number of live WAL files 2.
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.958887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2210KB)], [35(7202KB)]
Feb  2 12:42:11 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054131958938, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9638791, "oldest_snapshot_seqno": -1}
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4022 keys, 7830899 bytes, temperature: kUnknown
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054132118999, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7830899, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7801930, "index_size": 17787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98215, "raw_average_key_size": 24, "raw_value_size": 7727127, "raw_average_value_size": 1921, "num_data_blocks": 753, "num_entries": 4022, "num_filter_entries": 4022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.119287) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7830899 bytes
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.122793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.2 rd, 48.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 4536, records dropped: 514 output_compression: NoCompression
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.122814) EVENT_LOG_v1 {"time_micros": 1770054132122804, "job": 16, "event": "compaction_finished", "compaction_time_micros": 160177, "compaction_time_cpu_micros": 11805, "output_level": 6, "num_output_files": 1, "total_output_size": 7830899, "num_input_records": 4536, "num_output_records": 4022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054132123096, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054132123894, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:11.958817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.123991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.124000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.124004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.124007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:12 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:42:12.124010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:42:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb  2 12:42:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:42:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:42:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.244 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.244 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 12:42:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.281 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.282 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.282 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 12:42:23 np0005605476 nova_compute[239846]: 2026-02-02 17:42:23.303 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.314 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.315 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.315 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.332 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.332 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.359 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.360 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.360 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.360 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.360 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:42:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:42:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058168766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:42:25 np0005605476 nova_compute[239846]: 2026-02-02 17:42:25.891 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.020 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.021 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.022 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.022 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.243 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.244 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.337 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.439 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.440 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.458 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.478 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 12:42:26 np0005605476 nova_compute[239846]: 2026-02-02 17:42:26.494 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:42:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:42:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066233581' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:42:27 np0005605476 nova_compute[239846]: 2026-02-02 17:42:27.021 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:42:27 np0005605476 nova_compute[239846]: 2026-02-02 17:42:27.026 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:42:27 np0005605476 nova_compute[239846]: 2026-02-02 17:42:27.040 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:42:27 np0005605476 nova_compute[239846]: 2026-02-02 17:42:27.041 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:42:27 np0005605476 nova_compute[239846]: 2026-02-02 17:42:27.041 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:42:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.951 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.951 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.952 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.952 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.952 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.953 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:42:28 np0005605476 nova_compute[239846]: 2026-02-02 17:42:28.953 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:42:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:29 np0005605476 podman[243486]: 2026-02-02 17:42:29.613156381 +0000 UTC m=+0.060576715 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  2 12:42:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:32 np0005605476 podman[243506]: 2026-02-02 17:42:32.629809471 +0000 UTC m=+0.076482633 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:42:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:42:36
Feb  2 12:42:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:42:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:42:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'backups', 'default.rgw.control']
Feb  2 12:42:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:42:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:42:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb  2 12:42:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb  2 12:42:38 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb  2 12:42:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb  2 12:42:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb  2 12:42:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb  2 12:42:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb  2 12:42:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb  2 12:42:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb  2 12:42:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.7 KiB/s wr, 19 op/s
Feb  2 12:42:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb  2 12:42:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb  2 12:42:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb  2 12:42:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 KiB/s wr, 22 op/s
Feb  2 12:42:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:42:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:42:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.792533273 +0000 UTC m=+0.030774647 container create 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:44 np0005605476 systemd[1]: Started libpod-conmon-3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701.scope.
Feb  2 12:42:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.861958536 +0000 UTC m=+0.100199950 container init 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.869791657 +0000 UTC m=+0.108033061 container start 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.873231263 +0000 UTC m=+0.111472647 container attach 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:42:44 np0005605476 fervent_morse[243765]: 167 167
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.779462285 +0000 UTC m=+0.017703669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:44 np0005605476 systemd[1]: libpod-3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701.scope: Deactivated successfully.
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.877083052 +0000 UTC m=+0.115324456 container died 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:42:44 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6a9eec22d35ba7fb72ad751e38b45d41148e3963a20bf382c92fd0353861e6ab-merged.mount: Deactivated successfully.
Feb  2 12:42:44 np0005605476 podman[243749]: 2026-02-02 17:42:44.916336216 +0000 UTC m=+0.154577590 container remove 3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:42:44 np0005605476 systemd[1]: libpod-conmon-3efcfdb6d17d3996f59d0f986526f45c87aaccf0f30db77af220bf014a43b701.scope: Deactivated successfully.
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.02807809 +0000 UTC m=+0.039751899 container create 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:42:45 np0005605476 systemd[1]: Started libpod-conmon-48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d.scope.
Feb  2 12:42:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.009806506 +0000 UTC m=+0.021480355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.111042525 +0000 UTC m=+0.122716344 container init 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.115370826 +0000 UTC m=+0.127044625 container start 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.118630328 +0000 UTC m=+0.130304127 container attach 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 12:42:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.6 MiB/s wr, 62 op/s
Feb  2 12:42:45 np0005605476 elastic_stonebraker[243805]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:42:45 np0005605476 elastic_stonebraker[243805]: --> All data devices are unavailable
Feb  2 12:42:45 np0005605476 systemd[1]: libpod-48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d.scope: Deactivated successfully.
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.542836034 +0000 UTC m=+0.554509833 container died 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:42:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d60467d1cf2de3891501c46ef7e015b868ff59c5723fb267420a3d7f859a8037-merged.mount: Deactivated successfully.
Feb  2 12:42:45 np0005605476 podman[243789]: 2026-02-02 17:42:45.578903259 +0000 UTC m=+0.590577058 container remove 48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_stonebraker, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:42:45 np0005605476 systemd[1]: libpod-conmon-48f32775034459b8382f740e15bb85d71c2c0a66d4918571e4de7cc55a6e021d.scope: Deactivated successfully.
Feb  2 12:42:45 np0005605476 podman[243902]: 2026-02-02 17:42:45.970017493 +0000 UTC m=+0.035982294 container create aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:42:46 np0005605476 systemd[1]: Started libpod-conmon-aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da.scope.
Feb  2 12:42:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:46.043048798 +0000 UTC m=+0.109013699 container init aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:46.051749263 +0000 UTC m=+0.117714104 container start aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:45.957085889 +0000 UTC m=+0.023050710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:46 np0005605476 lucid_almeida[243919]: 167 167
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:46.05556156 +0000 UTC m=+0.121526531 container attach aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:42:46 np0005605476 systemd[1]: libpod-aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da.scope: Deactivated successfully.
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:46.056794075 +0000 UTC m=+0.122758906 container died aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:42:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd80d939307f325bb3fbc9884136b59895ca6e6f8272dc58ace47bbb5d157f19-merged.mount: Deactivated successfully.
Feb  2 12:42:46 np0005605476 podman[243902]: 2026-02-02 17:42:46.095809752 +0000 UTC m=+0.161774593 container remove aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_almeida, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:42:46 np0005605476 systemd[1]: libpod-conmon-aac3bdb65fa5f7f92a05bb6edc10fd0a6624662f508cb4fc5a23e1798b9737da.scope: Deactivated successfully.
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.229127013 +0000 UTC m=+0.053577428 container create 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:42:46 np0005605476 systemd[1]: Started libpod-conmon-10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c.scope.
Feb  2 12:42:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e813e34b06f63211bd57c14a2d5405cb6961c2ec3cd4f3ba26ef099394d1bb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e813e34b06f63211bd57c14a2d5405cb6961c2ec3cd4f3ba26ef099394d1bb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e813e34b06f63211bd57c14a2d5405cb6961c2ec3cd4f3ba26ef099394d1bb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e813e34b06f63211bd57c14a2d5405cb6961c2ec3cd4f3ba26ef099394d1bb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.298596008 +0000 UTC m=+0.123046493 container init 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.206128766 +0000 UTC m=+0.030579191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.306415408 +0000 UTC m=+0.130865853 container start 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.310599156 +0000 UTC m=+0.135049601 container attach 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:42:46 np0005605476 gracious_curran[243959]: {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    "0": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "devices": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "/dev/loop3"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            ],
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_name": "ceph_lv0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_size": "21470642176",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "name": "ceph_lv0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "tags": {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_name": "ceph",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.crush_device_class": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.encrypted": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.objectstore": "bluestore",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_id": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.vdo": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.with_tpm": "0"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            },
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "vg_name": "ceph_vg0"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        }
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    ],
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    "1": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "devices": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "/dev/loop4"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            ],
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_name": "ceph_lv1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_size": "21470642176",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "name": "ceph_lv1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "tags": {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_name": "ceph",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.crush_device_class": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.encrypted": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.objectstore": "bluestore",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_id": "1",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.vdo": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.with_tpm": "0"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            },
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "vg_name": "ceph_vg1"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        }
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    ],
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    "2": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "devices": [
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "/dev/loop5"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            ],
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_name": "ceph_lv2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_size": "21470642176",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "name": "ceph_lv2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "tags": {
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.cluster_name": "ceph",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.crush_device_class": "",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.encrypted": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.objectstore": "bluestore",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osd_id": "2",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.vdo": "0",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:                "ceph.with_tpm": "0"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            },
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "type": "block",
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:            "vg_name": "ceph_vg2"
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:        }
Feb  2 12:42:46 np0005605476 gracious_curran[243959]:    ]
Feb  2 12:42:46 np0005605476 gracious_curran[243959]: }
Feb  2 12:42:46 np0005605476 systemd[1]: libpod-10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c.scope: Deactivated successfully.
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.617344267 +0000 UTC m=+0.441794762 container died 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb  2 12:42:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:42:46.633 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:42:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:42:46.635 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:42:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:42:46.635 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:42:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1e813e34b06f63211bd57c14a2d5405cb6961c2ec3cd4f3ba26ef099394d1bb0-merged.mount: Deactivated successfully.
Feb  2 12:42:46 np0005605476 podman[243943]: 2026-02-02 17:42:46.667645072 +0000 UTC m=+0.492095487 container remove 10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:42:46 np0005605476 systemd[1]: libpod-conmon-10dee199078bd57820d25c254e4e0ac6db9f4633eb93edce1fc05c3c5dd26e4c.scope: Deactivated successfully.
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.152564976 +0000 UTC m=+0.041796467 container create 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:42:47 np0005605476 systemd[1]: Started libpod-conmon-3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9.scope.
Feb  2 12:42:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.212337648 +0000 UTC m=+0.101569159 container init 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.218006768 +0000 UTC m=+0.107238249 container start 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.221564198 +0000 UTC m=+0.110795709 container attach 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:42:47 np0005605476 beautiful_hodgkin[244057]: 167 167
Feb  2 12:42:47 np0005605476 systemd[1]: libpod-3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9.scope: Deactivated successfully.
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.223852662 +0000 UTC m=+0.113084143 container died 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.135444165 +0000 UTC m=+0.024675676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a9e629dad9f7b8bbfb19922096b2eb203e7fdb38c154e2f42bcedfe47e77d221-merged.mount: Deactivated successfully.
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb  2 12:42:47 np0005605476 podman[244041]: 2026-02-02 17:42:47.257290773 +0000 UTC m=+0.146522254 container remove 3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hodgkin, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:42:47 np0005605476 systemd[1]: libpod-conmon-3eb9158e69ffb122a174e61bbb5a0e0b9f83cdcb162c692d7ad39cc839d519b9.scope: Deactivated successfully.
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658909973971211 of space, bias 1.0, pg target 0.19976729921913633 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6309246697400691e-06 of space, bias 4.0, pg target 0.001957109603688083 quantized to 16 (current 16)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:42:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:42:47 np0005605476 podman[244080]: 2026-02-02 17:42:47.386024905 +0000 UTC m=+0.037198857 container create 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:42:47 np0005605476 systemd[1]: Started libpod-conmon-24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b.scope.
Feb  2 12:42:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:42:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b802692747a4388e0adb5292ab160dfa3b10abc2f420b978da977384d29a66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b802692747a4388e0adb5292ab160dfa3b10abc2f420b978da977384d29a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b802692747a4388e0adb5292ab160dfa3b10abc2f420b978da977384d29a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b802692747a4388e0adb5292ab160dfa3b10abc2f420b978da977384d29a66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:42:47 np0005605476 podman[244080]: 2026-02-02 17:42:47.462421845 +0000 UTC m=+0.113595827 container init 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:42:47 np0005605476 podman[244080]: 2026-02-02 17:42:47.367683549 +0000 UTC m=+0.018857531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:42:47 np0005605476 podman[244080]: 2026-02-02 17:42:47.468503566 +0000 UTC m=+0.119677518 container start 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:42:47 np0005605476 podman[244080]: 2026-02-02 17:42:47.472107688 +0000 UTC m=+0.123281650 container attach 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:42:48 np0005605476 lvm[244172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:42:48 np0005605476 lvm[244172]: VG ceph_vg0 finished
Feb  2 12:42:48 np0005605476 lvm[244175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:42:48 np0005605476 lvm[244175]: VG ceph_vg1 finished
Feb  2 12:42:48 np0005605476 lvm[244177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:42:48 np0005605476 lvm[244177]: VG ceph_vg2 finished
Feb  2 12:42:48 np0005605476 ecstatic_ganguly[244096]: {}
Feb  2 12:42:48 np0005605476 systemd[1]: libpod-24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b.scope: Deactivated successfully.
Feb  2 12:42:48 np0005605476 podman[244080]: 2026-02-02 17:42:48.197876709 +0000 UTC m=+0.849050701 container died 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:42:48 np0005605476 systemd[1]: libpod-24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b.scope: Consumed 1.080s CPU time.
Feb  2 12:42:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-25b802692747a4388e0adb5292ab160dfa3b10abc2f420b978da977384d29a66-merged.mount: Deactivated successfully.
Feb  2 12:42:48 np0005605476 podman[244080]: 2026-02-02 17:42:48.254964815 +0000 UTC m=+0.906138777 container remove 24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:42:48 np0005605476 systemd[1]: libpod-conmon-24477a3e3fbe8bf26c3381dc6659648466db2fa6de031f387ac44a9473d8cb6b.scope: Deactivated successfully.
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:48 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:42:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.5 MiB/s wr, 34 op/s
Feb  2 12:42:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb  2 12:42:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb  2 12:42:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb  2 12:42:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.5 MiB/s wr, 28 op/s
Feb  2 12:42:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.1 MiB/s wr, 26 op/s
Feb  2 12:42:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:42:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:42:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:43:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:00.543 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:43:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:00.545 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 12:43:00 np0005605476 podman[244220]: 2026-02-02 17:43:00.646301769 +0000 UTC m=+0.090623221 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb  2 12:43:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb  2 12:43:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb  2 12:43:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb  2 12:43:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:43:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb  2 12:43:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb  2 12:43:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb  2 12:43:03 np0005605476 podman[244240]: 2026-02-02 17:43:03.036533764 +0000 UTC m=+0.148041506 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:43:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:43:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb  2 12:43:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb  2 12:43:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb  2 12:43:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737125926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737125926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:43:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.2 KiB/s wr, 12 op/s
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.7 KiB/s wr, 16 op/s
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:07 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:07.547 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:43:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389787149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389787149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb  2 12:43:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb  2 12:43:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb  2 12:43:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Feb  2 12:43:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 12:43:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb  2 12:43:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb  2 12:43:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb  2 12:43:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2484828030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2484828030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.3 KiB/s wr, 77 op/s
Feb  2 12:43:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941128595' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/941128595' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 62 op/s
Feb  2 12:43:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.0 KiB/s wr, 76 op/s
Feb  2 12:43:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.3 KiB/s wr, 76 op/s
Feb  2 12:43:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb  2 12:43:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb  2 12:43:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb  2 12:43:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb  2 12:43:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb  2 12:43:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb  2 12:43:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Feb  2 12:43:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb  2 12:43:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb  2 12:43:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb  2 12:43:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb  2 12:43:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb  2 12:43:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb  2 12:43:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.7 KiB/s wr, 89 op/s
Feb  2 12:43:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb  2 12:43:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb  2 12:43:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb  2 12:43:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.9 KiB/s wr, 81 op/s
Feb  2 12:43:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069360488' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069360488' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb  2 12:43:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb  2 12:43:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb  2 12:43:25 np0005605476 nova_compute[239846]: 2026-02-02 17:43:25.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 4.0 KiB/s wr, 122 op/s
Feb  2 12:43:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb  2 12:43:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb  2 12:43:26 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb  2 12:43:26 np0005605476 nova_compute[239846]: 2026-02-02 17:43:26.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:43:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 4.7 KiB/s wr, 122 op/s
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.295 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.295 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.295 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.323 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.324 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.324 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.324 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.324 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:43:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:43:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953567242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.839 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.974 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.975 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.975 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:43:27 np0005605476 nova_compute[239846]: 2026-02-02 17:43:27.976 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.036 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.037 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.060 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:43:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb  2 12:43:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb  2 12:43:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb  2 12:43:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:43:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371472170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.591 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.596 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.630 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.632 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:43:28 np0005605476 nova_compute[239846]: 2026-02-02 17:43:28.633 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb  2 12:43:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.8 KiB/s wr, 66 op/s
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb  2 12:43:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.580 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.580 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.580 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.580 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.581 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:43:30 np0005605476 nova_compute[239846]: 2026-02-02 17:43:30.581 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:43:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb  2 12:43:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb  2 12:43:31 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb  2 12:43:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.0 KiB/s wr, 118 op/s
Feb  2 12:43:31 np0005605476 podman[244310]: 2026-02-02 17:43:31.60726945 +0000 UTC m=+0.049556346 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:43:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1820191253' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1820191253' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb  2 12:43:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb  2 12:43:33 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb  2 12:43:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 5.9 KiB/s wr, 111 op/s
Feb  2 12:43:33 np0005605476 podman[244330]: 2026-02-02 17:43:33.637536274 +0000 UTC m=+0.084491528 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0)
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb  2 12:43:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb  2 12:43:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Feb  2 12:43:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb  2 12:43:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb  2 12:43:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb  2 12:43:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3392469459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3392469459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:43:36
Feb  2 12:43:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:43:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:43:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'backups']
Feb  2 12:43:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:43:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584657333' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584657333' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 7.7 KiB/s wr, 168 op/s
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:43:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:43:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.3 KiB/s wr, 114 op/s
Feb  2 12:43:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb  2 12:43:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb  2 12:43:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb  2 12:43:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 5.9 KiB/s wr, 133 op/s
Feb  2 12:43:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb  2 12:43:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb  2 12:43:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb  2 12:43:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Feb  2 12:43:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb  2 12:43:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb  2 12:43:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb  2 12:43:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Feb  2 12:43:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4152065819' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4152065819' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:46.633 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:43:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:46.634 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:43:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:43:46.634 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 KiB/s wr, 75 op/s
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.230223620738214e-06 of space, bias 1.0, pg target 0.0006690670862214643 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659013836723974 of space, bias 1.0, pg target 0.19977041510171922 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1523176529696734e-06 of space, bias 4.0, pg target 0.0013827811835636081 quantized to 16 (current 16)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:43:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:43:48 np0005605476 podman[244449]: 2026-02-02 17:43:48.912856723 +0000 UTC m=+0.056447472 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:43:48 np0005605476 podman[244449]: 2026-02-02 17:43:48.98459338 +0000 UTC m=+0.128184129 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:43:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Feb  2 12:43:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:43:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:43:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.654716066 +0000 UTC m=+0.039316335 container create 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:43:50 np0005605476 systemd[1]: Started libpod-conmon-3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9.scope.
Feb  2 12:43:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.718777899 +0000 UTC m=+0.103378258 container init 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.724040916 +0000 UTC m=+0.108641185 container start 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:43:50 np0005605476 loving_visvesvaraya[244790]: 167 167
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.727614065 +0000 UTC m=+0.112214354 container attach 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:50 np0005605476 systemd[1]: libpod-3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9.scope: Deactivated successfully.
Feb  2 12:43:50 np0005605476 conmon[244790]: conmon 3970efde3b6ce1db6538 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9.scope/container/memory.events
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.729534578 +0000 UTC m=+0.114134847 container died 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.635902942 +0000 UTC m=+0.020503251 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ab456bd5739a5316ac49409fd74b15b6db343b43a20b66bae565d9eaaedeead7-merged.mount: Deactivated successfully.
Feb  2 12:43:50 np0005605476 podman[244773]: 2026-02-02 17:43:50.761322003 +0000 UTC m=+0.145922272 container remove 3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_visvesvaraya, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:43:50 np0005605476 systemd[1]: libpod-conmon-3970efde3b6ce1db6538fb0996e42ad745931d40f478bd74a4417bd4fb1059a9.scope: Deactivated successfully.
Feb  2 12:43:50 np0005605476 podman[244815]: 2026-02-02 17:43:50.891499567 +0000 UTC m=+0.040931141 container create 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:50 np0005605476 systemd[1]: Started libpod-conmon-900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6.scope.
Feb  2 12:43:50 np0005605476 podman[244815]: 2026-02-02 17:43:50.871883291 +0000 UTC m=+0.021314865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:51 np0005605476 podman[244815]: 2026-02-02 17:43:51.009020958 +0000 UTC m=+0.158452492 container init 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:43:51 np0005605476 podman[244815]: 2026-02-02 17:43:51.022629257 +0000 UTC m=+0.172060831 container start 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:51 np0005605476 podman[244815]: 2026-02-02 17:43:51.026508365 +0000 UTC m=+0.175939919 container attach 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:43:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Feb  2 12:43:51 np0005605476 bold_banzai[244831]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:43:51 np0005605476 bold_banzai[244831]: --> All data devices are unavailable
Feb  2 12:43:51 np0005605476 systemd[1]: libpod-900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6.scope: Deactivated successfully.
Feb  2 12:43:51 np0005605476 podman[244815]: 2026-02-02 17:43:51.474440003 +0000 UTC m=+0.623871547 container died 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c9f24aea30a9b9fbe1aea78187d0573e0ff7a61cc179b61999f4c99739e22bd6-merged.mount: Deactivated successfully.
Feb  2 12:43:51 np0005605476 podman[244815]: 2026-02-02 17:43:51.513670285 +0000 UTC m=+0.663101819 container remove 900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_banzai, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:43:51 np0005605476 systemd[1]: libpod-conmon-900b8424913daae73f4ea2f6ee4f04bd26e8a3ca8810ce96525e78b0fe23d3f6.scope: Deactivated successfully.
Feb  2 12:43:51 np0005605476 podman[244930]: 2026-02-02 17:43:51.931569757 +0000 UTC m=+0.047919785 container create dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:43:51 np0005605476 systemd[1]: Started libpod-conmon-dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4.scope.
Feb  2 12:43:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:52.000729122 +0000 UTC m=+0.117079170 container init dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:52.00570408 +0000 UTC m=+0.122054078 container start dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:51.910257524 +0000 UTC m=+0.026607552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:52.008821637 +0000 UTC m=+0.125171715 container attach dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:43:52 np0005605476 brave_greider[244946]: 167 167
Feb  2 12:43:52 np0005605476 systemd[1]: libpod-dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4.scope: Deactivated successfully.
Feb  2 12:43:52 np0005605476 conmon[244946]: conmon dd1e5126c86701c7a309 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4.scope/container/memory.events
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:52.011933884 +0000 UTC m=+0.128283882 container died dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:43:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e30824d7c550f7ac4bf6118a01cac3be3f718f43ff168bd6cdd3e42cfc981f9e-merged.mount: Deactivated successfully.
Feb  2 12:43:52 np0005605476 podman[244930]: 2026-02-02 17:43:52.04484506 +0000 UTC m=+0.161195058 container remove dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:43:52 np0005605476 systemd[1]: libpod-conmon-dd1e5126c86701c7a3091a65d88d14df3739cda84b21cab8e47e2888ef977cf4.scope: Deactivated successfully.
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.205557533 +0000 UTC m=+0.042962557 container create 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:52 np0005605476 systemd[1]: Started libpod-conmon-7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514.scope.
Feb  2 12:43:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9497b48d9950be6e415a7979825028fac537e4f0031c4e4a0dc6b7bfe75c3071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9497b48d9950be6e415a7979825028fac537e4f0031c4e4a0dc6b7bfe75c3071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9497b48d9950be6e415a7979825028fac537e4f0031c4e4a0dc6b7bfe75c3071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9497b48d9950be6e415a7979825028fac537e4f0031c4e4a0dc6b7bfe75c3071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.282446183 +0000 UTC m=+0.119851237 container init 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.189242879 +0000 UTC m=+0.026647903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.29059075 +0000 UTC m=+0.127995764 container start 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.2938384 +0000 UTC m=+0.131243444 container attach 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]: {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    "0": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "devices": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "/dev/loop3"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            ],
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_name": "ceph_lv0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_size": "21470642176",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "name": "ceph_lv0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "tags": {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_name": "ceph",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.crush_device_class": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.encrypted": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.objectstore": "bluestore",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_id": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.vdo": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.with_tpm": "0"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            },
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "vg_name": "ceph_vg0"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        }
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    ],
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    "1": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "devices": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "/dev/loop4"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            ],
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_name": "ceph_lv1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_size": "21470642176",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "name": "ceph_lv1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "tags": {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_name": "ceph",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.crush_device_class": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.encrypted": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.objectstore": "bluestore",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_id": "1",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.vdo": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.with_tpm": "0"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            },
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "vg_name": "ceph_vg1"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        }
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    ],
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    "2": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "devices": [
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "/dev/loop5"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            ],
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_name": "ceph_lv2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_size": "21470642176",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "name": "ceph_lv2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "tags": {
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.cluster_name": "ceph",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.crush_device_class": "",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.encrypted": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.objectstore": "bluestore",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osd_id": "2",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.vdo": "0",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:                "ceph.with_tpm": "0"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            },
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "type": "block",
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:            "vg_name": "ceph_vg2"
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:        }
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]:    ]
Feb  2 12:43:52 np0005605476 vibrant_ptolemy[244987]: }
Feb  2 12:43:52 np0005605476 systemd[1]: libpod-7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514.scope: Deactivated successfully.
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.589578692 +0000 UTC m=+0.426983716 container died 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:43:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9497b48d9950be6e415a7979825028fac537e4f0031c4e4a0dc6b7bfe75c3071-merged.mount: Deactivated successfully.
Feb  2 12:43:52 np0005605476 podman[244970]: 2026-02-02 17:43:52.629556975 +0000 UTC m=+0.466961989 container remove 7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:43:52 np0005605476 systemd[1]: libpod-conmon-7b6db406f1a6a87b8c2fb20bc22c6c698bf082b2d45f1b35d0148bb0a7afb514.scope: Deactivated successfully.
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.062753143 +0000 UTC m=+0.051439413 container create e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:43:53 np0005605476 systemd[1]: Started libpod-conmon-e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5.scope.
Feb  2 12:43:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.138506272 +0000 UTC m=+0.127192612 container init e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.043548128 +0000 UTC m=+0.032234438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.143219333 +0000 UTC m=+0.131905593 container start e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.146450923 +0000 UTC m=+0.135137283 container attach e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:43:53 np0005605476 hungry_wilbur[245087]: 167 167
Feb  2 12:43:53 np0005605476 systemd[1]: libpod-e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5.scope: Deactivated successfully.
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.148327815 +0000 UTC m=+0.137014085 container died e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:43:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-13f3e2d25eb21d15775d8b21673bcb76abcdda90fcad25da51a602483f7a0573-merged.mount: Deactivated successfully.
Feb  2 12:43:53 np0005605476 podman[245071]: 2026-02-02 17:43:53.184353358 +0000 UTC m=+0.173039618 container remove e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:43:53 np0005605476 systemd[1]: libpod-conmon-e42cb211d5bfeea97a097180c05aa04ac1668f35781dec0c8d420d0a643b34e5.scope: Deactivated successfully.
Feb  2 12:43:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Feb  2 12:43:53 np0005605476 podman[245110]: 2026-02-02 17:43:53.330918927 +0000 UTC m=+0.045383154 container create d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:53 np0005605476 systemd[1]: Started libpod-conmon-d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7.scope.
Feb  2 12:43:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:43:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72742e79a6783dcaa0fe02cea651655d1dce35d41a2fc6d15071edd467c658a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72742e79a6783dcaa0fe02cea651655d1dce35d41a2fc6d15071edd467c658a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72742e79a6783dcaa0fe02cea651655d1dce35d41a2fc6d15071edd467c658a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72742e79a6783dcaa0fe02cea651655d1dce35d41a2fc6d15071edd467c658a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:43:53 np0005605476 podman[245110]: 2026-02-02 17:43:53.310821848 +0000 UTC m=+0.025286105 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:43:53 np0005605476 podman[245110]: 2026-02-02 17:43:53.414412361 +0000 UTC m=+0.128876608 container init d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:43:53 np0005605476 podman[245110]: 2026-02-02 17:43:53.429000507 +0000 UTC m=+0.143464764 container start d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:43:53 np0005605476 podman[245110]: 2026-02-02 17:43:53.432744382 +0000 UTC m=+0.147208629 container attach d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:43:54 np0005605476 lvm[245206]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:43:54 np0005605476 lvm[245206]: VG ceph_vg1 finished
Feb  2 12:43:54 np0005605476 lvm[245205]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:43:54 np0005605476 lvm[245205]: VG ceph_vg0 finished
Feb  2 12:43:54 np0005605476 lvm[245208]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:43:54 np0005605476 lvm[245208]: VG ceph_vg2 finished
Feb  2 12:43:54 np0005605476 vibrant_neumann[245127]: {}
Feb  2 12:43:54 np0005605476 systemd[1]: libpod-d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7.scope: Deactivated successfully.
Feb  2 12:43:54 np0005605476 systemd[1]: libpod-d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7.scope: Consumed 1.105s CPU time.
Feb  2 12:43:54 np0005605476 podman[245212]: 2026-02-02 17:43:54.180919056 +0000 UTC m=+0.027588909 container died d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:43:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-72742e79a6783dcaa0fe02cea651655d1dce35d41a2fc6d15071edd467c658a6-merged.mount: Deactivated successfully.
Feb  2 12:43:54 np0005605476 podman[245212]: 2026-02-02 17:43:54.235432303 +0000 UTC m=+0.082102126 container remove d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:43:54 np0005605476 systemd[1]: libpod-conmon-d48639a74aa1892f90a62788b16d298f6d8d9ffcb5b85cf6b1e7e0431e0e72b7.scope: Deactivated successfully.
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb  2 12:43:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb  2 12:43:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Feb  2 12:43:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3846487007' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3846487007' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 16 op/s
Feb  2 12:43:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:43:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769990042' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:43:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:43:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769990042' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:43:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 614 B/s wr, 29 op/s
Feb  2 12:43:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Feb  2 12:44:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb  2 12:44:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb  2 12:44:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb  2 12:44:02 np0005605476 podman[245253]: 2026-02-02 17:44:02.620283791 +0000 UTC m=+0.061737059 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:44:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Feb  2 12:44:04 np0005605476 podman[245273]: 2026-02-02 17:44:04.630693009 +0000 UTC m=+0.083253347 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3491286802' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3491286802' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833739575' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833739575' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854056196' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854056196' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 60 op/s
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873738948' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873738948' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Feb  2 12:44:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:10.148 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:44:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:10.150 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:44:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:10.151 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:44:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.6 KiB/s wr, 92 op/s
Feb  2 12:44:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Feb  2 12:44:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb  2 12:44:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb  2 12:44:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb  2 12:44:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 49 op/s
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb  2 12:44:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.5 KiB/s wr, 48 op/s
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475876559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1475876559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444487163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444487163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4076325804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4076325804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Feb  2 12:44:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356381000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356381000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2637624862' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2637624862' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.7 KiB/s wr, 80 op/s
Feb  2 12:44:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 76 op/s
Feb  2 12:44:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.0 KiB/s wr, 87 op/s
Feb  2 12:44:26 np0005605476 nova_compute[239846]: 2026-02-02 17:44:26.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.9 KiB/s wr, 85 op/s
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:44:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.4 KiB/s wr, 72 op/s
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.332 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.333 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.334 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.334 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.372 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.372 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.373 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.373 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.373 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266119815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266119815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:44:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121747430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:44:29 np0005605476 nova_compute[239846]: 2026-02-02 17:44:29.876 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.025 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.026 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.027 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.027 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.099 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.099 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.126 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012995518' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.692 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.698 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.718 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.721 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:44:30 np0005605476 nova_compute[239846]: 2026-02-02 17:44:30.721 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244648249' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244648249' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077001930' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077001930' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 59 op/s
Feb  2 12:44:32 np0005605476 nova_compute[239846]: 2026-02-02 17:44:32.630 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:32 np0005605476 nova_compute[239846]: 2026-02-02 17:44:32.630 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:32 np0005605476 nova_compute[239846]: 2026-02-02 17:44:32.631 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:44:32 np0005605476 nova_compute[239846]: 2026-02-02 17:44:32.631 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:44:33 np0005605476 podman[245343]: 2026-02-02 17:44:33.111978672 +0000 UTC m=+0.062891911 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 12:44:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Feb  2 12:44:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624387067' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624387067' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 KiB/s wr, 61 op/s
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3129687134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3129687134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:35 np0005605476 podman[245364]: 2026-02-02 17:44:35.615720212 +0000 UTC m=+0.067803638 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:44:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:44:36
Feb  2 12:44:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:44:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:44:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', 'vms']
Feb  2 12:44:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:44:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:44:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271774649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271774649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852737557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Feb  2 12:44:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852737557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.4 KiB/s wr, 84 op/s
Feb  2 12:44:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Feb  2 12:44:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 KiB/s wr, 71 op/s
Feb  2 12:44:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1752973261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1752973261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:46.634 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:44:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:46.635 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:44:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:44:46.635 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:44:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392923108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392923108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.703816038234676e-06 of space, bias 1.0, pg target 0.0008111448114704029 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658830796087938 of space, bias 1.0, pg target 0.19976492388263814 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4280429876602866e-06 of space, bias 4.0, pg target 0.0017136515851923439 quantized to 16 (current 16)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:44:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:44:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.3 KiB/s wr, 41 op/s
Feb  2 12:44:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:44:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854725685' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:44:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:44:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854725685' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:44:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Feb  2 12:44:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Feb  2 12:44:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.427462121 +0000 UTC m=+0.059561128 container create b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:44:55 np0005605476 systemd[1]: Started libpod-conmon-b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79.scope.
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.395966755 +0000 UTC m=+0.028065812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.516727856 +0000 UTC m=+0.148826943 container init b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.524326108 +0000 UTC m=+0.156425145 container start b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:44:55 np0005605476 xenodochial_galois[245551]: 167 167
Feb  2 12:44:55 np0005605476 systemd[1]: libpod-b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79.scope: Deactivated successfully.
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.531319682 +0000 UTC m=+0.163418729 container attach b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.531748634 +0000 UTC m=+0.163847691 container died b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:44:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-95e437c3421d9e05c5b8e3f4178a98c8cf8160bbd206749b76e9f7cc4549db18-merged.mount: Deactivated successfully.
Feb  2 12:44:55 np0005605476 podman[245534]: 2026-02-02 17:44:55.600007534 +0000 UTC m=+0.232106581 container remove b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:44:55 np0005605476 systemd[1]: libpod-conmon-b1d1aeb7b89589c586ae3fb073c59b8dec125a569bbba5f45bb1837358a2cf79.scope: Deactivated successfully.
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:44:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:44:55 np0005605476 podman[245577]: 2026-02-02 17:44:55.777240317 +0000 UTC m=+0.056707979 container create 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:44:55 np0005605476 systemd[1]: Started libpod-conmon-8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087.scope.
Feb  2 12:44:55 np0005605476 podman[245577]: 2026-02-02 17:44:55.748040895 +0000 UTC m=+0.027508577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:55 np0005605476 podman[245577]: 2026-02-02 17:44:55.889594205 +0000 UTC m=+0.169061927 container init 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:44:55 np0005605476 podman[245577]: 2026-02-02 17:44:55.896491087 +0000 UTC m=+0.175958719 container start 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb  2 12:44:55 np0005605476 podman[245577]: 2026-02-02 17:44:55.89984905 +0000 UTC m=+0.179316732 container attach 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:44:56 np0005605476 focused_herschel[245594]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:44:56 np0005605476 focused_herschel[245594]: --> All data devices are unavailable
Feb  2 12:44:56 np0005605476 systemd[1]: libpod-8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087.scope: Deactivated successfully.
Feb  2 12:44:56 np0005605476 podman[245577]: 2026-02-02 17:44:56.336424682 +0000 UTC m=+0.615892364 container died 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:44:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-dabd72faf57734f247e43315fd7bd7c99dc31e11ba53c7631699792920310249-merged.mount: Deactivated successfully.
Feb  2 12:44:56 np0005605476 podman[245577]: 2026-02-02 17:44:56.376675972 +0000 UTC m=+0.656143614 container remove 8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:44:56 np0005605476 systemd[1]: libpod-conmon-8ffcd3e642872b614a8a5672700e71e0795022496db57a6bd90107113cb1e087.scope: Deactivated successfully.
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.848749643 +0000 UTC m=+0.052551764 container create b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:44:56 np0005605476 systemd[1]: Started libpod-conmon-b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd.scope.
Feb  2 12:44:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.821930386 +0000 UTC m=+0.025732557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.921559739 +0000 UTC m=+0.125361890 container init b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.929243623 +0000 UTC m=+0.133045774 container start b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.932800582 +0000 UTC m=+0.136602743 container attach b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:44:56 np0005605476 nifty_shamir[245703]: 167 167
Feb  2 12:44:56 np0005605476 systemd[1]: libpod-b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd.scope: Deactivated successfully.
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.934136049 +0000 UTC m=+0.137938230 container died b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:44:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a63197bafc22a5e498f6bcf2039cd06ec9d5bbe4770ea3e833050e0e7728a8b4-merged.mount: Deactivated successfully.
Feb  2 12:44:56 np0005605476 podman[245686]: 2026-02-02 17:44:56.97115922 +0000 UTC m=+0.174961341 container remove b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:44:56 np0005605476 systemd[1]: libpod-conmon-b0a61cc109cf1bf4761d02217dd6ff5dcaedbd56719da08af08b27f06663b6dd.scope: Deactivated successfully.
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.101137488 +0000 UTC m=+0.031360954 container create 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:44:57 np0005605476 systemd[1]: Started libpod-conmon-4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223.scope.
Feb  2 12:44:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e76243ed058cf1bfc84692be66a5d16d7cb3ddad3114120293e1165420e3ecd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e76243ed058cf1bfc84692be66a5d16d7cb3ddad3114120293e1165420e3ecd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e76243ed058cf1bfc84692be66a5d16d7cb3ddad3114120293e1165420e3ecd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e76243ed058cf1bfc84692be66a5d16d7cb3ddad3114120293e1165420e3ecd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.170447147 +0000 UTC m=+0.100670683 container init 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.175208649 +0000 UTC m=+0.105432115 container start 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.178544472 +0000 UTC m=+0.108767988 container attach 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.088526537 +0000 UTC m=+0.018750023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]: {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    "0": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "devices": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "/dev/loop3"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            ],
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_name": "ceph_lv0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_size": "21470642176",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "name": "ceph_lv0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "tags": {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_name": "ceph",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.crush_device_class": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.encrypted": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.objectstore": "bluestore",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_id": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.vdo": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.with_tpm": "0"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            },
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "vg_name": "ceph_vg0"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        }
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    ],
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    "1": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "devices": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "/dev/loop4"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            ],
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_name": "ceph_lv1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_size": "21470642176",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "name": "ceph_lv1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "tags": {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_name": "ceph",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.crush_device_class": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.encrypted": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.objectstore": "bluestore",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_id": "1",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.vdo": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.with_tpm": "0"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            },
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "vg_name": "ceph_vg1"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        }
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    ],
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    "2": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "devices": [
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "/dev/loop5"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            ],
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_name": "ceph_lv2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_size": "21470642176",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "name": "ceph_lv2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "tags": {
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.cluster_name": "ceph",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.crush_device_class": "",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.encrypted": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.objectstore": "bluestore",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osd_id": "2",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.vdo": "0",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:                "ceph.with_tpm": "0"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            },
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "type": "block",
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:            "vg_name": "ceph_vg2"
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:        }
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]:    ]
Feb  2 12:44:57 np0005605476 romantic_faraday[245744]: }
Feb  2 12:44:57 np0005605476 systemd[1]: libpod-4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223.scope: Deactivated successfully.
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.462874607 +0000 UTC m=+0.393098083 container died 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:44:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8e76243ed058cf1bfc84692be66a5d16d7cb3ddad3114120293e1165420e3ecd-merged.mount: Deactivated successfully.
Feb  2 12:44:57 np0005605476 podman[245727]: 2026-02-02 17:44:57.501782859 +0000 UTC m=+0.432006335 container remove 4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:44:57 np0005605476 systemd[1]: libpod-conmon-4b097e68496fa2c7e8aed3a16c7963d87bb801406232fc09c44b344d486c8223.scope: Deactivated successfully.
Feb  2 12:44:57 np0005605476 podman[245827]: 2026-02-02 17:44:57.934625848 +0000 UTC m=+0.050542658 container create 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:44:57 np0005605476 systemd[1]: Started libpod-conmon-110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff.scope.
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:57.909041385 +0000 UTC m=+0.024958255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:58.017458573 +0000 UTC m=+0.133375343 container init 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:58.02237688 +0000 UTC m=+0.138293650 container start 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:44:58 np0005605476 boring_keldysh[245844]: 167 167
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:58.02560475 +0000 UTC m=+0.141521540 container attach 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:44:58 np0005605476 systemd[1]: libpod-110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff.scope: Deactivated successfully.
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:58.026979378 +0000 UTC m=+0.142896148 container died 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:44:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-74858b154d152ec0b7c195f44b31b0b60349ff97eb87a8626147e7efa4121044-merged.mount: Deactivated successfully.
Feb  2 12:44:58 np0005605476 podman[245827]: 2026-02-02 17:44:58.060629005 +0000 UTC m=+0.176545775 container remove 110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:44:58 np0005605476 systemd[1]: libpod-conmon-110d3c34ceff00a3b50937f5070f159e73109e0d489ce501dea6f39b56ec46ff.scope: Deactivated successfully.
Feb  2 12:44:58 np0005605476 podman[245868]: 2026-02-02 17:44:58.206529246 +0000 UTC m=+0.050334032 container create 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:44:58 np0005605476 systemd[1]: Started libpod-conmon-78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b.scope.
Feb  2 12:44:58 np0005605476 podman[245868]: 2026-02-02 17:44:58.185548512 +0000 UTC m=+0.029353318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:44:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:44:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449defe5dae826f14667effc2d0ccafe1284d87cc987f9c0eb44dcc84c0b206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449defe5dae826f14667effc2d0ccafe1284d87cc987f9c0eb44dcc84c0b206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449defe5dae826f14667effc2d0ccafe1284d87cc987f9c0eb44dcc84c0b206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b449defe5dae826f14667effc2d0ccafe1284d87cc987f9c0eb44dcc84c0b206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:44:58 np0005605476 podman[245868]: 2026-02-02 17:44:58.3266927 +0000 UTC m=+0.170497576 container init 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Feb  2 12:44:58 np0005605476 podman[245868]: 2026-02-02 17:44:58.333210921 +0000 UTC m=+0.177015747 container start 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:44:58 np0005605476 podman[245868]: 2026-02-02 17:44:58.337628694 +0000 UTC m=+0.181433580 container attach 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:44:58 np0005605476 lvm[245963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:44:58 np0005605476 lvm[245963]: VG ceph_vg0 finished
Feb  2 12:44:58 np0005605476 lvm[245964]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:44:58 np0005605476 lvm[245964]: VG ceph_vg1 finished
Feb  2 12:44:59 np0005605476 lvm[245966]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:44:59 np0005605476 lvm[245966]: VG ceph_vg2 finished
Feb  2 12:44:59 np0005605476 focused_easley[245885]: {}
Feb  2 12:44:59 np0005605476 systemd[1]: libpod-78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b.scope: Deactivated successfully.
Feb  2 12:44:59 np0005605476 systemd[1]: libpod-78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b.scope: Consumed 1.123s CPU time.
Feb  2 12:44:59 np0005605476 podman[245868]: 2026-02-02 17:44:59.126462001 +0000 UTC m=+0.970266837 container died 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:44:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b449defe5dae826f14667effc2d0ccafe1284d87cc987f9c0eb44dcc84c0b206-merged.mount: Deactivated successfully.
Feb  2 12:44:59 np0005605476 podman[245868]: 2026-02-02 17:44:59.17241154 +0000 UTC m=+1.016216356 container remove 78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:44:59 np0005605476 systemd[1]: libpod-conmon-78e87b2a29d3230228fe0b1d668929326fdf7a90d73f522259530d73a625512b.scope: Deactivated successfully.
Feb  2 12:44:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:44:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:44:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:44:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:44:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Feb  2 12:44:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:45:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:45:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Feb  2 12:45:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Feb  2 12:45:03 np0005605476 podman[246005]: 2026-02-02 17:45:03.601716587 +0000 UTC m=+0.050107526 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent)
Feb  2 12:45:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2530004547' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2530004547' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Feb  2 12:45:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226033090' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb  2 12:45:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb  2 12:45:06 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb  2 12:45:06 np0005605476 podman[246024]: 2026-02-02 17:45:06.650920459 +0000 UTC m=+0.101214588 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 409 B/s wr, 3 op/s
Feb  2 12:45:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb  2 12:45:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb  2 12:45:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.104 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.104 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.122 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.234 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.235 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.244 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.245 239853 INFO nova.compute.claims [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.344 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.447572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308447600, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2271, "num_deletes": 259, "total_data_size": 3408184, "memory_usage": 3461680, "flush_reason": "Manual Compaction"}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308468721, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3324011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16344, "largest_seqno": 18614, "table_properties": {"data_size": 3313558, "index_size": 6691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21897, "raw_average_key_size": 20, "raw_value_size": 3292455, "raw_average_value_size": 3120, "num_data_blocks": 296, "num_entries": 1055, "num_filter_entries": 1055, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054132, "oldest_key_time": 1770054132, "file_creation_time": 1770054308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 21220 microseconds, and 4925 cpu microseconds.
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.468786) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3324011 bytes OK
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.468811) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.471369) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.471384) EVENT_LOG_v1 {"time_micros": 1770054308471380, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.471402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3398484, prev total WAL file size 3398484, number of live WAL files 2.
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.471898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3246KB)], [38(7647KB)]
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308471968, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11154910, "oldest_snapshot_seqno": -1}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4551 keys, 9393878 bytes, temperature: kUnknown
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308516665, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9393878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9359632, "index_size": 21784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 110201, "raw_average_key_size": 24, "raw_value_size": 9273712, "raw_average_value_size": 2037, "num_data_blocks": 919, "num_entries": 4551, "num_filter_entries": 4551, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.516883) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9393878 bytes
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.520558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.2 rd, 209.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5077, records dropped: 526 output_compression: NoCompression
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.520580) EVENT_LOG_v1 {"time_micros": 1770054308520571, "job": 18, "event": "compaction_finished", "compaction_time_micros": 44762, "compaction_time_cpu_micros": 19585, "output_level": 6, "num_output_files": 1, "total_output_size": 9393878, "num_input_records": 5077, "num_output_records": 4551, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308520897, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054308521400, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.471817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.521470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.521475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.521476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.521478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:45:08.521479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762140948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.924 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.929 239853 DEBUG nova.compute.provider_tree [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.948 239853 DEBUG nova.scheduler.client.report [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.969 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:08 np0005605476 nova_compute[239846]: 2026-02-02 17:45:08.970 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.008 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.008 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.036 239853 INFO nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.055 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.165 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.167 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.167 239853 INFO nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Creating image(s)#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.191 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.213 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.234 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.237 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.238 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Feb  2 12:45:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb  2 12:45:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb  2 12:45:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.797 239853 WARNING oslo_policy.policy [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.798 239853 WARNING oslo_policy.policy [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.802 239853 DEBUG nova.policy [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f2b1366a8ee34a0e9437bb253f37a284', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28896be470ca44d887bb24e9da819ee1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:45:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:09 np0005605476 nova_compute[239846]: 2026-02-02 17:45:09.988 239853 DEBUG nova.virt.libvirt.imagebackend [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Image locations are: [{'url': 'rbd://eb48d0ef-3496-563c-b73d-661fb962013e/images/88ad7b87-724c-4a9f-a946-6c9736783609/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://eb48d0ef-3496-563c-b73d-661fb962013e/images/88ad7b87-724c-4a9f-a946-6c9736783609/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb  2 12:45:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:10.595 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:10.596 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.781 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.829 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.part --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.830 239853 DEBUG nova.virt.images [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] 88ad7b87-724c-4a9f-a946-6c9736783609 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.832 239853 DEBUG nova.privsep.utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.832 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.part /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.896 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Successfully created port: 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.988 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.part /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.converted" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:10 np0005605476 nova_compute[239846]: 2026-02-02 17:45:10.992 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:11 np0005605476 nova_compute[239846]: 2026-02-02 17:45:11.042 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:11 np0005605476 nova_compute[239846]: 2026-02-02 17:45:11.043 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:11 np0005605476 nova_compute[239846]: 2026-02-02 17:45:11.062 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:11 np0005605476 nova_compute[239846]: 2026-02-02 17:45:11.066 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 4af1978a-81d5-4487-b5a2-07917afc796f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Feb  2 12:45:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb  2 12:45:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb  2 12:45:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.184 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.185 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.207 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.290 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.290 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.297 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.298 239853 INFO nova.compute.claims [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.446 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb  2 12:45:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb  2 12:45:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.561 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Successfully updated port: 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.584 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.585 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquired lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.585 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:45:12 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:12.598 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.733 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 4af1978a-81d5-4487-b5a2-07917afc796f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.757 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.785 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] resizing rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.857 239853 DEBUG nova.objects.instance [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lazy-loading 'migration_context' on Instance uuid 4af1978a-81d5-4487-b5a2-07917afc796f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.877 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.878 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Ensure instance console log exists: /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.879 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.879 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.879 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668937519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:12 np0005605476 nova_compute[239846]: 2026-02-02 17:45:12.995 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.001 239853 DEBUG nova.compute.provider_tree [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.053 239853 DEBUG nova.compute.manager [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-changed-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.053 239853 DEBUG nova.compute.manager [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Refreshing instance network info cache due to event network-changed-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.054 239853 DEBUG oslo_concurrency.lockutils [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.056 239853 ERROR nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [req-4e52b8e9-9f11-44c2-9b70-d5afef9a7772] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID a0b0d175-0948-46db-92ba-608ef43a689f.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-4e52b8e9-9f11-44c2-9b70-d5afef9a7772"}]}#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.072 239853 DEBUG nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.092 239853 DEBUG nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.092 239853 DEBUG nova.compute.provider_tree [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:45:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/214548096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.123 239853 DEBUG nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.148 239853 DEBUG nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.193 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.3 KiB/s wr, 31 op/s
Feb  2 12:45:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966683984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.754 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.760 239853 DEBUG nova.compute.provider_tree [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.870 239853 DEBUG nova.scheduler.client.report [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updated inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.871 239853 DEBUG nova.compute.provider_tree [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating resource provider a0b0d175-0948-46db-92ba-608ef43a689f generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.871 239853 DEBUG nova.compute.provider_tree [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.936 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:13 np0005605476 nova_compute[239846]: 2026-02-02 17:45:13.937 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.100 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.100 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.105 239853 DEBUG nova.network.neutron [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Updating instance_info_cache with network_info: [{"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.181 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Releasing lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.181 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Instance network_info: |[{"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.181 239853 DEBUG oslo_concurrency.lockutils [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.182 239853 DEBUG nova.network.neutron [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Refreshing network info cache for port 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.185 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Start _get_guest_xml network_info=[{"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.190 239853 WARNING nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.195 239853 DEBUG nova.virt.libvirt.host [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.195 239853 DEBUG nova.virt.libvirt.host [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.202 239853 DEBUG nova.virt.libvirt.host [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.202 239853 DEBUG nova.virt.libvirt.host [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.203 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.203 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.203 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.204 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.205 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.205 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.205 239853 DEBUG nova.virt.hardware [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.208 239853 DEBUG nova.privsep.utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.209 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.227 239853 INFO nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.271 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.476 239853 DEBUG nova.policy [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '067cb133f5004edda930844c63f37aad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54713476150d4f62beed2a2d89131f2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.541 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.543 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.543 239853 INFO nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Creating image(s)#033[00m
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.570 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.609 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.637 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.642 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.698 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.699 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.700 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.700 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.724 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.730 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 1839664c-7601-4228-8383-be2631448879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877701659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.776 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.801 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:14 np0005605476 nova_compute[239846]: 2026-02-02 17:45:14.806 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb  2 12:45:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb  2 12:45:14 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.284 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 1839664c-7601-4228-8383-be2631448879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 73 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.2 MiB/s wr, 83 op/s
Feb  2 12:45:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/501890177' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.361 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] resizing rbd image 1839664c-7601-4228-8383-be2631448879_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.389 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.390 239853 DEBUG nova.virt.libvirt.vif [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1377744495',display_name='tempest-VolumesActionsTest-instance-1377744495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1377744495',id=1,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28896be470ca44d887bb24e9da819ee1',ramdisk_id='',reservation_id='r-5empzgju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1730916975',owner_user_name='tempest-VolumesActionsTest-1730916
975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:09Z,user_data=None,user_id='f2b1366a8ee34a0e9437bb253f37a284',uuid=4af1978a-81d5-4487-b5a2-07917afc796f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.391 239853 DEBUG nova.network.os_vif_util [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converting VIF {"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.392 239853 DEBUG nova.network.os_vif_util [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.395 239853 DEBUG nova.objects.instance [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4af1978a-81d5-4487-b5a2-07917afc796f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.418 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <uuid>4af1978a-81d5-4487-b5a2-07917afc796f</uuid>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <name>instance-00000001</name>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesActionsTest-instance-1377744495</nova:name>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:45:14</nova:creationTime>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:user uuid="f2b1366a8ee34a0e9437bb253f37a284">tempest-VolumesActionsTest-1730916975-project-member</nova:user>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:project uuid="28896be470ca44d887bb24e9da819ee1">tempest-VolumesActionsTest-1730916975</nova:project>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <nova:port uuid="5a6fc3d8-4a79-4675-8e70-3199ef6a61e3">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="serial">4af1978a-81d5-4487-b5a2-07917afc796f</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="uuid">4af1978a-81d5-4487-b5a2-07917afc796f</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/4af1978a-81d5-4487-b5a2-07917afc796f_disk">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/4af1978a-81d5-4487-b5a2-07917afc796f_disk.config">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:7b:ee:2a"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <target dev="tap5a6fc3d8-4a"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/console.log" append="off"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:45:15 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:45:15 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:45:15 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:45:15 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.419 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Preparing to wait for external event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.419 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.420 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.420 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.421 239853 DEBUG nova.virt.libvirt.vif [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1377744495',display_name='tempest-VolumesActionsTest-instance-1377744495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1377744495',id=1,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28896be470ca44d887bb24e9da819ee1',ramdisk_id='',reservation_id='r-5empzgju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1730916975',owner_user_name='tempest-VolumesActionsTest-1730916975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:09Z,user_data=None,user_id='f2b1366a8ee34a0e9437bb253f37a284',uuid=4af1978a-81d5-4487-b5a2-07917afc796f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.421 239853 DEBUG nova.network.os_vif_util [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converting VIF {"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.421 239853 DEBUG nova.network.os_vif_util [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.422 239853 DEBUG os_vif [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.481 239853 DEBUG ovsdbapp.backend.ovs_idl [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.482 239853 DEBUG ovsdbapp.backend.ovs_idl [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.482 239853 DEBUG ovsdbapp.backend.ovs_idl [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.483 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.483 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [POLLOUT] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.484 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.484 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.488 239853 DEBUG nova.objects.instance [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'migration_context' on Instance uuid 1839664c-7601-4228-8383-be2631448879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.490 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.492 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.501 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.501 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.501 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.503 239853 INFO oslo.privsep.daemon [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp70ca1rbh/privsep.sock']#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.520 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.521 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Ensure instance console log exists: /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.522 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.522 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:15 np0005605476 nova_compute[239846]: 2026-02-02 17:45:15.523 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.189 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Successfully created port: 297dd7c7-e452-4cca-a536-0b1f09789489 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.194 239853 INFO oslo.privsep.daemon [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.019 246527 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.023 246527 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.024 246527 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.025 246527 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246527#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.517 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.517 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a6fc3d8-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.518 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a6fc3d8-4a, col_values=(('external_ids', {'iface-id': '5a6fc3d8-4a79-4675-8e70-3199ef6a61e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:ee:2a', 'vm-uuid': '4af1978a-81d5-4487-b5a2-07917afc796f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.519 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:16 np0005605476 NetworkManager[49022]: <info>  [1770054316.5207] manager: (tap5a6fc3d8-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.523 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.525 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.525 239853 INFO os_vif [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a')#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.568 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.569 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.569 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] No VIF found with MAC fa:16:3e:7b:ee:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.569 239853 INFO nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Using config drive#033[00m
Feb  2 12:45:16 np0005605476 nova_compute[239846]: 2026-02-02 17:45:16.584 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb  2 12:45:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb  2 12:45:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.152 239853 DEBUG nova.network.neutron [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Updated VIF entry in instance network info cache for port 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.153 239853 DEBUG nova.network.neutron [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Updating instance_info_cache with network_info: [{"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.192 239853 DEBUG oslo_concurrency.lockutils [req-02b40e6f-6463-4c7d-a848-518ebdb9b9bb req-8a3f36dd-c294-45ec-b3e4-905b6c88c26a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-4af1978a-81d5-4487-b5a2-07917afc796f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.301 239853 INFO nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Creating config drive at /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.307 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp88nivksa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:45:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 103 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.5 MiB/s wr, 131 op/s
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2100305096' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2100305096' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.435 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp88nivksa" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.467 239853 DEBUG nova.storage.rbd_utils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] rbd image 4af1978a-81d5-4487-b5a2-07917afc796f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.471 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config 4af1978a-81d5-4487-b5a2-07917afc796f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.654 239853 DEBUG oslo_concurrency.processutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config 4af1978a-81d5-4487-b5a2-07917afc796f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.655 239853 INFO nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Deleting local config drive /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f/disk.config because it was imported into RBD.
Feb  2 12:45:17 np0005605476 systemd[1]: Starting libvirt secret daemon...
Feb  2 12:45:17 np0005605476 systemd[1]: Started libvirt secret daemon.
Feb  2 12:45:17 np0005605476 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb  2 12:45:17 np0005605476 kernel: tap5a6fc3d8-4a: entered promiscuous mode
Feb  2 12:45:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:17Z|00027|binding|INFO|Claiming lport 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 for this chassis.
Feb  2 12:45:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:17Z|00028|binding|INFO|5a6fc3d8-4a79-4675-8e70-3199ef6a61e3: Claiming fa:16:3e:7b:ee:2a 10.100.0.4
Feb  2 12:45:17 np0005605476 NetworkManager[49022]: <info>  [1770054317.7572] manager: (tap5a6fc3d8-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.756 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.759 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:17 np0005605476 systemd-udevd[246624]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:45:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:17Z|00029|binding|INFO|Setting lport 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 ovn-installed in OVS
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.801 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.802 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:17 np0005605476 nova_compute[239846]: 2026-02-02 17:45:17.806 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:17 np0005605476 NetworkManager[49022]: <info>  [1770054317.8124] device (tap5a6fc3d8-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:45:17 np0005605476 NetworkManager[49022]: <info>  [1770054317.8133] device (tap5a6fc3d8-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:45:17 np0005605476 systemd-machined[208080]: New machine qemu-1-instance-00000001.
Feb  2 12:45:17 np0005605476 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb  2 12:45:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb  2 12:45:18 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:18Z|00030|binding|INFO|Setting lport 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 up in Southbound
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.014 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ee:2a 10.100.0.4'], port_security=['fa:16:3e:7b:ee:2a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4af1978a-81d5-4487-b5a2-07917afc796f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28896be470ca44d887bb24e9da819ee1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01e82fcf-c326-4345-a87f-a3e7a709dc13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b072f30-6c16-4a55-8964-a24e67279145, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.015 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 in datapath 691da22e-0a6a-44ed-b98e-b631dbd59fb2 bound to our chassis
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.017 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691da22e-0a6a-44ed-b98e-b631dbd59fb2
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.018 155391 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpx1bw3qnr/privsep.sock']
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.070 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Successfully updated port: 297dd7c7-e452-4cca-a536-0b1f09789489 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.480 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054318.4800336, 4af1978a-81d5-4487-b5a2-07917afc796f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.481 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] VM Started (Lifecycle Event)
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.518 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.519 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquired lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.519 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.555 239853 DEBUG nova.compute.manager [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-changed-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.556 239853 DEBUG nova.compute.manager [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Refreshing instance network info cache due to event network-changed-297dd7c7-e452-4cca-a536-0b1f09789489. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.556 239853 DEBUG oslo_concurrency.lockutils [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.559 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.562 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054318.4814854, 4af1978a-81d5-4487-b5a2-07917afc796f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.562 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] VM Paused (Lifecycle Event)
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.586 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.588 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.605 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.660 155391 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.660 155391 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx1bw3qnr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.547 246686 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.550 246686 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.552 246686 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.553 246686 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246686
Feb  2 12:45:18 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:18.662 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[41a09e6d-2e93-4993-bb41-ca8d82a623ff]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:18 np0005605476 nova_compute[239846]: 2026-02-02 17:45:18.912 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb  2 12:45:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 125 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.2 MiB/s wr, 138 op/s
Feb  2 12:45:19 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:19.383 246686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:45:19 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:19.383 246686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:45:19 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:19.383 246686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:19 np0005605476 nova_compute[239846]: 2026-02-02 17:45:19.448 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb  2 12:45:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb  2 12:45:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.084 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e63096d7-22b8-4aed-bc4e-b486a55ba3b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.085 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691da22e-01 in ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.086 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691da22e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.086 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cb850057-8265-4397-b76e-0c064def2180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.089 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[024ae21d-57d7-486f-93aa-a72bcc6ae918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.105 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcc0fd5-7dce-4416-be67-625b848087ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.123 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[08f3802d-6be4-495b-9d00-91bc9673deb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.125 155391 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp037046cp/privsep.sock']
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3735388188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3735388188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.219 239853 DEBUG nova.network.neutron [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Updating instance_info_cache with network_info: [{"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.245 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Releasing lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.246 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Instance network_info: |[{"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.246 239853 DEBUG oslo_concurrency.lockutils [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.246 239853 DEBUG nova.network.neutron [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Refreshing network info cache for port 297dd7c7-e452-4cca-a536-0b1f09789489 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.251 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Start _get_guest_xml network_info=[{"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.255 239853 WARNING nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.269 239853 DEBUG nova.virt.libvirt.host [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.270 239853 DEBUG nova.virt.libvirt.host [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.275 239853 DEBUG nova.virt.libvirt.host [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.276 239853 DEBUG nova.virt.libvirt.host [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.276 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.277 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.277 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.277 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.277 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.277 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.278 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.278 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.278 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.278 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.278 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.279 239853 DEBUG nova.virt.hardware [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.281 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284409937' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.774 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.788 155391 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.790 155391 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp037046cp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.632 246720 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.791 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.636 246720 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.638 246720 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.638 246720 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246720#033[00m
Feb  2 12:45:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:20.794 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9d7ce8-6645-4416-bbab-43487ecd6e5a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:20 np0005605476 nova_compute[239846]: 2026-02-02 17:45:20.795 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.199 239853 DEBUG nova.compute.manager [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.200 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.200 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.201 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.201 239853 DEBUG nova.compute.manager [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Processing event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.202 239853 DEBUG nova.compute.manager [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.202 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.203 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.203 239853 DEBUG oslo_concurrency.lockutils [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.204 239853 DEBUG nova.compute.manager [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] No waiting events found dispatching network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.204 239853 WARNING nova.compute.manager [req-2caea8ca-2cff-4d43-be44-5001901767c4 req-f80d5c43-3621-4b53-a56b-448976601007 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received unexpected event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.206 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.224 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.226 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054321.225586, 4af1978a-81d5-4487-b5a2-07917afc796f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.226 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.231 239853 INFO nova.virt.libvirt.driver [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Instance spawned successfully.#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.231 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.233 246720 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.233 246720 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.233 246720 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.252 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.258 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.263 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.263 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.264 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.264 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.265 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.266 239853 DEBUG nova.virt.libvirt.driver [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.277 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:45:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752314808' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.301 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.302 239853 DEBUG nova.virt.libvirt.vif [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-177134479',display_name='tempest-VolumesActionsTest-instance-177134479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-177134479',id=2,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-n55z8hap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:14Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=1839664c-7601-4228-8383-be2631448879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.303 239853 DEBUG nova.network.os_vif_util [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.304 239853 DEBUG nova.network.os_vif_util [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.304 239853 DEBUG nova.objects.instance [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1839664c-7601-4228-8383-be2631448879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.319 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <uuid>1839664c-7601-4228-8383-be2631448879</uuid>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <name>instance-00000002</name>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesActionsTest-instance-177134479</nova:name>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:45:20</nova:creationTime>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:user uuid="067cb133f5004edda930844c63f37aad">tempest-VolumesActionsTest-1170802853-project-member</nova:user>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:project uuid="54713476150d4f62beed2a2d89131f2b">tempest-VolumesActionsTest-1170802853</nova:project>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <nova:port uuid="297dd7c7-e452-4cca-a536-0b1f09789489">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="serial">1839664c-7601-4228-8383-be2631448879</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="uuid">1839664c-7601-4228-8383-be2631448879</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/1839664c-7601-4228-8383-be2631448879_disk">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/1839664c-7601-4228-8383-be2631448879_disk.config">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:38:61:fb"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <target dev="tap297dd7c7-e4"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/console.log" append="off"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:45:21 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:45:21 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:45:21 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:45:21 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:45:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 134 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 4.3 MiB/s wr, 241 op/s
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.326 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Preparing to wait for external event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.326 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.327 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.327 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.328 239853 DEBUG nova.virt.libvirt.vif [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-177134479',display_name='tempest-VolumesActionsTest-instance-177134479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-177134479',id=2,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-n55z8hap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:14Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=1839664c-7601-4228-8383-be2631448879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.328 239853 DEBUG nova.network.os_vif_util [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.329 239853 DEBUG nova.network.os_vif_util [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.329 239853 DEBUG os_vif [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.331 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.331 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.332 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.333 239853 INFO nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Took 12.17 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.334 239853 DEBUG nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.337 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.338 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap297dd7c7-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.338 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap297dd7c7-e4, col_values=(('external_ids', {'iface-id': '297dd7c7-e452-4cca-a536-0b1f09789489', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:61:fb', 'vm-uuid': '1839664c-7601-4228-8383-be2631448879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.340 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:21 np0005605476 NetworkManager[49022]: <info>  [1770054321.3412] manager: (tap297dd7c7-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.341 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.344 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.346 239853 INFO os_vif [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4')#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.402 239853 INFO nova.compute.manager [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Took 13.22 seconds to build instance.#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.409 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.409 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.409 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No VIF found with MAC fa:16:3e:38:61:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.410 239853 INFO nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Using config drive#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.432 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.439 239853 DEBUG oslo_concurrency.lockutils [None req-f8e506fa-93c4-4bbd-88d0-2232c02ed688 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.791 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[c2159328-5d35-4827-8704-8c7887c1c506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.805 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[06cdd1ee-7002-4d8b-ac9c-a76d5420926b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 NetworkManager[49022]: <info>  [1770054321.8068] manager: (tap691da22e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.818 239853 INFO nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Creating config drive at /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.823 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzy4eyu9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:21 np0005605476 systemd-udevd[246796]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.830 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[ba26046c-a5c5-466f-9860-bdb4b38e9848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.832 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f9327d12-ac56-484e-8ba3-c83d73c1f75c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 NetworkManager[49022]: <info>  [1770054321.8546] device (tap691da22e-00): carrier: link connected
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.857 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9d8ff7-a28f-421c-a0bc-b7570413fd77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.870 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f9678b0c-7509-46e9-8b60-fc669b7f4720]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691da22e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:e7:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356318, 'reachable_time': 22129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246819, 'error': None, 'target': 'ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.883 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69a60b47-8c1a-4c68-be45-4a703cc75039]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:e71f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 356318, 'tstamp': 356318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246820, 'error': None, 'target': 'ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.894 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[989ba6ac-c498-4365-8995-52ad4d8a5f3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691da22e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:e7:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356318, 'reachable_time': 22129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246821, 'error': None, 'target': 'ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.918 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1becea-7c9e-4781-a855-b378b98c78b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.942 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzy4eyu9v" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.963 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8d1ae6-69a4-4d31-a09e-049685700a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.966 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691da22e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:21 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.966 239853 DEBUG nova.storage.rbd_utils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 1839664c-7601-4228-8383-be2631448879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.966 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:45:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:21.967 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691da22e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:21.970 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config 1839664c-7601-4228-8383-be2631448879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.0124] manager: (tap691da22e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb  2 12:45:22 np0005605476 kernel: tap691da22e-00: entered promiscuous mode
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.016 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691da22e-00, col_values=(('external_ids', {'iface-id': '00c67835-795a-45ee-b380-a5e7b0a4d319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00031|binding|INFO|Releasing lport 00c67835-795a-45ee-b380-a5e7b0a4d319 from this chassis (sb_readonly=0)
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.021 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.026 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691da22e-0a6a-44ed-b98e-b631dbd59fb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691da22e-0a6a-44ed-b98e-b631dbd59fb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.027 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a69dbf1e-5f5e-4a45-a9cf-ac4e3f9c5fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.029 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-691da22e-0a6a-44ed-b98e-b631dbd59fb2
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/691da22e-0a6a-44ed-b98e-b631dbd59fb2.pid.haproxy
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 691da22e-0a6a-44ed-b98e-b631dbd59fb2
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.030 239853 DEBUG nova.network.neutron [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Updated VIF entry in instance network info cache for port 297dd7c7-e452-4cca-a536-0b1f09789489. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.030 239853 DEBUG nova.network.neutron [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Updating instance_info_cache with network_info: [{"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.031 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'env', 'PROCESS_TAG=haproxy-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691da22e-0a6a-44ed-b98e-b631dbd59fb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.042 239853 DEBUG oslo_concurrency.lockutils [req-808794aa-1da2-4f69-9f3c-6c68499521e5 req-6521c0f0-0ebe-4f85-a026-d582a79ab157 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-1839664c-7601-4228-8383-be2631448879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.120 239853 DEBUG oslo_concurrency.processutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config 1839664c-7601-4228-8383-be2631448879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.120 239853 INFO nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Deleting local config drive /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879/disk.config because it was imported into RBD.#033[00m
Feb  2 12:45:22 np0005605476 kernel: tap297dd7c7-e4: entered promiscuous mode
Feb  2 12:45:22 np0005605476 systemd-udevd[246813]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.1499] manager: (tap297dd7c7-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00032|binding|INFO|Claiming lport 297dd7c7-e452-4cca-a536-0b1f09789489 for this chassis.
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00033|binding|INFO|297dd7c7-e452-4cca-a536-0b1f09789489: Claiming fa:16:3e:38:61:fb 10.100.0.7
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.153 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.1582] device (tap297dd7c7-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.158 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.1602] device (tap297dd7c7-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.163 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:61:fb 10.100.0.7'], port_security=['fa:16:3e:38:61:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1839664c-7601-4228-8383-be2631448879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=297dd7c7-e452-4cca-a536-0b1f09789489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:22 np0005605476 systemd-machined[208080]: New machine qemu-2-instance-00000002.
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.181 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00034|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 ovn-installed in OVS
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00035|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 up in Southbound
Feb  2 12:45:22 np0005605476 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.186 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 podman[246912]: 2026-02-02 17:45:22.420905498 +0000 UTC m=+0.056844914 container create b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:45:22 np0005605476 systemd[1]: Started libpod-conmon-b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee.scope.
Feb  2 12:45:22 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:45:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885597d3bae1c6bf720135c9d69b54d853cd75f71434da979c6f665f67de0e21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:45:22 np0005605476 podman[246912]: 2026-02-02 17:45:22.47882281 +0000 UTC m=+0.114762246 container init b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:45:22 np0005605476 podman[246912]: 2026-02-02 17:45:22.483736547 +0000 UTC m=+0.119675963 container start b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:45:22 np0005605476 podman[246912]: 2026-02-02 17:45:22.391411927 +0000 UTC m=+0.027351363 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:45:22 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [NOTICE]   (246931) : New worker (246947) forked
Feb  2 12:45:22 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [NOTICE]   (246931) : Loading success.
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.543 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 297dd7c7-e452-4cca-a536-0b1f09789489 in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d unbound from our chassis#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.545 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.555 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc97adeb-235c-4ab0-ae2e-87e883f946bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.556 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0e2bcc8-d1 in ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.558 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0e2bcc8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.558 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[523b66e8-2d93-40ad-bb20-5ca947c548cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.559 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e87bb27-571f-4fe0-9c21-8c8a3afdfe77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.577 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[4589340e-03ad-471f-87ee-86fb4f9941a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.591 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[35580ad6-7f04-436a-a3a3-54e6333d8bbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.612 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2516a1-e961-485e-93f4-b18b7e0c071c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.618 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0699e03e-7a67-46f5-844e-ec93c2dda109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.6195] manager: (tapb0e2bcc8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.646 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8b775a-6ef2-46ec-b41f-67d06db7fcc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.650 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3e34fbbc-83f1-412c-8032-e7a878fe7446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.664 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054322.6632812, 1839664c-7601-4228-8383-be2631448879 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.665 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] VM Started (Lifecycle Event)#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.6702] device (tapb0e2bcc8-d0): carrier: link connected
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.675 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1e42d61a-4758-40c8-abb2-39ec63248d81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.689 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.690 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fee5c765-3cf6-4b60-ae23-35219eaca790]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0e2bcc8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:3d:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356400, 'reachable_time': 21130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246995, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.696 239853 DEBUG nova.compute.manager [req-ba05c564-8a0a-4b59-a9cf-69c742335ff1 req-6e6aa00f-a133-45cf-9032-bbf6e5ed738a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.696 239853 DEBUG oslo_concurrency.lockutils [req-ba05c564-8a0a-4b59-a9cf-69c742335ff1 req-6e6aa00f-a133-45cf-9032-bbf6e5ed738a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.696 239853 DEBUG oslo_concurrency.lockutils [req-ba05c564-8a0a-4b59-a9cf-69c742335ff1 req-6e6aa00f-a133-45cf-9032-bbf6e5ed738a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.697 239853 DEBUG oslo_concurrency.lockutils [req-ba05c564-8a0a-4b59-a9cf-69c742335ff1 req-6e6aa00f-a133-45cf-9032-bbf6e5ed738a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.697 239853 DEBUG nova.compute.manager [req-ba05c564-8a0a-4b59-a9cf-69c742335ff1 req-6e6aa00f-a133-45cf-9032-bbf6e5ed738a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Processing event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.698 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054322.6637535, 1839664c-7601-4228-8383-be2631448879 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.698 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.700 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.703 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.706 239853 INFO nova.virt.libvirt.driver [-] [instance: 1839664c-7601-4228-8383-be2631448879] Instance spawned successfully.#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.705 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[361a7b30-1a8a-436f-a5a0-fec2391200a4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:3de2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 356400, 'tstamp': 356400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246996, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.706 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.718 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5641d802-0469-412d-96a0-8e43050c8729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0e2bcc8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:3d:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356400, 'reachable_time': 21130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246997, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.723 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.729 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054322.7046626, 1839664c-7601-4228-8383-be2631448879 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.729 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.733 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.734 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.735 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.735 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.736 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.736 239853 DEBUG nova.virt.libvirt.driver [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.742 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[724a4c80-d75c-42fc-a58b-ca8e5e0b7855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.752 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.755 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.797 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a3768f-91c9-4a86-816a-686b055842b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.798 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0e2bcc8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.799 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.800 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0e2bcc8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.801 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:45:22 np0005605476 NetworkManager[49022]: <info>  [1770054322.8024] manager: (tapb0e2bcc8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb  2 12:45:22 np0005605476 kernel: tapb0e2bcc8-d0: entered promiscuous mode
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.806 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0e2bcc8-d0, col_values=(('external_ids', {'iface-id': '8321b436-a113-451e-be67-58eea0929a06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:22Z|00036|binding|INFO|Releasing lport 8321b436-a113-451e-be67-58eea0929a06 from this chassis (sb_readonly=0)
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.801 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.818 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.818 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.820 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69b9fc3f-ad13-41af-b9be-4c0abe5c5cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.821 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:45:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:22.823 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'env', 'PROCESS_TAG=haproxy-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.827 239853 INFO nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Took 8.29 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.828 239853 DEBUG nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.901 239853 INFO nova.compute.manager [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Took 10.63 seconds to build instance.#033[00m
Feb  2 12:45:22 np0005605476 nova_compute[239846]: 2026-02-02 17:45:22.919 239853 DEBUG oslo_concurrency.lockutils [None req-062757e3-4dd8-447a-9b7e-9fc2753bc641 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:23 np0005605476 podman[247029]: 2026-02-02 17:45:23.193505233 +0000 UTC m=+0.078269490 container create 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:45:23 np0005605476 podman[247029]: 2026-02-02 17:45:23.143364757 +0000 UTC m=+0.028129034 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:45:23 np0005605476 systemd[1]: Started libpod-conmon-6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78.scope.
Feb  2 12:45:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:45:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c875c42c1ea0c347509e09d6b20205b975bc4b08d025f76a2dcf512da05fba37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:45:23 np0005605476 podman[247029]: 2026-02-02 17:45:23.276312518 +0000 UTC m=+0.161076805 container init 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:45:23 np0005605476 podman[247029]: 2026-02-02 17:45:23.284360202 +0000 UTC m=+0.169124459 container start 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:45:23 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [NOTICE]   (247048) : New worker (247050) forked
Feb  2 12:45:23 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [NOTICE]   (247048) : Loading success.
Feb  2 12:45:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 134 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 2.5 MiB/s wr, 179 op/s
Feb  2 12:45:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3943904761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb  2 12:45:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb  2 12:45:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.451 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.839 239853 DEBUG nova.compute.manager [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.839 239853 DEBUG oslo_concurrency.lockutils [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.839 239853 DEBUG oslo_concurrency.lockutils [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.840 239853 DEBUG oslo_concurrency.lockutils [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.840 239853 DEBUG nova.compute.manager [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.840 239853 WARNING nova.compute.manager [req-81e81d12-7250-48d4-b8b8-36ff41772470 req-682f4718-de2a-40b0-97ef-b5e69c62c662 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:45:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb  2 12:45:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb  2 12:45:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.885 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.886 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.886 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.886 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.886 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.887 239853 INFO nova.compute.manager [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Terminating instance#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.890 239853 DEBUG nova.compute.manager [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:45:24 np0005605476 kernel: tap297dd7c7-e4 (unregistering): left promiscuous mode
Feb  2 12:45:24 np0005605476 NetworkManager[49022]: <info>  [1770054324.9226] device (tap297dd7c7-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:45:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:24Z|00037|binding|INFO|Releasing lport 297dd7c7-e452-4cca-a536-0b1f09789489 from this chassis (sb_readonly=0)
Feb  2 12:45:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:24Z|00038|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 down in Southbound
Feb  2 12:45:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:24Z|00039|binding|INFO|Removing iface tap297dd7c7-e4 ovn-installed in OVS
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.932 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:24 np0005605476 nova_compute[239846]: 2026-02-02 17:45:24.944 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:24.954 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:61:fb 10.100.0.7'], port_security=['fa:16:3e:38:61:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1839664c-7601-4228-8383-be2631448879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=297dd7c7-e452-4cca-a536-0b1f09789489) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:24.956 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 297dd7c7-e452-4cca-a536-0b1f09789489 in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d unbound from our chassis#033[00m
Feb  2 12:45:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:24.957 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:45:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:24.958 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8caf04f9-6384-4180-a3f9-d342348b6034]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:24.958 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d namespace which is not needed anymore#033[00m
Feb  2 12:45:24 np0005605476 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb  2 12:45:24 np0005605476 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 2.617s CPU time.
Feb  2 12:45:24 np0005605476 systemd-machined[208080]: Machine qemu-2-instance-00000002 terminated.
Feb  2 12:45:25 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [NOTICE]   (247048) : haproxy version is 2.8.14-c23fe91
Feb  2 12:45:25 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [NOTICE]   (247048) : path to executable is /usr/sbin/haproxy
Feb  2 12:45:25 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [WARNING]  (247048) : Exiting Master process...
Feb  2 12:45:25 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [ALERT]    (247048) : Current worker (247050) exited with code 143 (Terminated)
Feb  2 12:45:25 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247044]: [WARNING]  (247048) : All workers exited. Exiting... (0)
Feb  2 12:45:25 np0005605476 systemd[1]: libpod-6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78.scope: Deactivated successfully.
Feb  2 12:45:25 np0005605476 podman[247083]: 2026-02-02 17:45:25.0752503 +0000 UTC m=+0.040766826 container died 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:45:25 np0005605476 kernel: tap297dd7c7-e4: entered promiscuous mode
Feb  2 12:45:25 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78-userdata-shm.mount: Deactivated successfully.
Feb  2 12:45:25 np0005605476 NetworkManager[49022]: <info>  [1770054325.1074] manager: (tap297dd7c7-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Feb  2 12:45:25 np0005605476 systemd-udevd[247062]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.108 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00040|binding|INFO|Claiming lport 297dd7c7-e452-4cca-a536-0b1f09789489 for this chassis.
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00041|binding|INFO|297dd7c7-e452-4cca-a536-0b1f09789489: Claiming fa:16:3e:38:61:fb 10.100.0.7
Feb  2 12:45:25 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c875c42c1ea0c347509e09d6b20205b975bc4b08d025f76a2dcf512da05fba37-merged.mount: Deactivated successfully.
Feb  2 12:45:25 np0005605476 kernel: tap297dd7c7-e4 (unregistering): left promiscuous mode
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.116 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:61:fb 10.100.0.7'], port_security=['fa:16:3e:38:61:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1839664c-7601-4228-8383-be2631448879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=297dd7c7-e452-4cca-a536-0b1f09789489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00042|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 ovn-installed in OVS
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00043|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 up in Southbound
Feb  2 12:45:25 np0005605476 podman[247083]: 2026-02-02 17:45:25.128341657 +0000 UTC m=+0.093858183 container cleanup 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00044|binding|INFO|Releasing lport 297dd7c7-e452-4cca-a536-0b1f09789489 from this chassis (sb_readonly=1)
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00045|if_status|INFO|Not setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 down as sb is readonly
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00046|binding|INFO|Removing iface tap297dd7c7-e4 ovn-installed in OVS
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.127 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 systemd[1]: libpod-conmon-6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78.scope: Deactivated successfully.
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00047|binding|INFO|Releasing lport 297dd7c7-e452-4cca-a536-0b1f09789489 from this chassis (sb_readonly=0)
Feb  2 12:45:25 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:25Z|00048|binding|INFO|Setting lport 297dd7c7-e452-4cca-a536-0b1f09789489 down in Southbound
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.140 239853 INFO nova.virt.libvirt.driver [-] [instance: 1839664c-7601-4228-8383-be2631448879] Instance destroyed successfully.#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.140 239853 DEBUG nova.objects.instance [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'resources' on Instance uuid 1839664c-7601-4228-8383-be2631448879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.142 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.150 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:61:fb 10.100.0.7'], port_security=['fa:16:3e:38:61:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1839664c-7601-4228-8383-be2631448879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=297dd7c7-e452-4cca-a536-0b1f09789489) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.169 239853 DEBUG nova.virt.libvirt.vif [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-177134479',display_name='tempest-VolumesActionsTest-instance-177134479',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-177134479',id=2,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:45:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-n55z8hap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:45:22Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=1839664c-7601-4228-8383-be2631448879,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.170 239853 DEBUG nova.network.os_vif_util [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "297dd7c7-e452-4cca-a536-0b1f09789489", "address": "fa:16:3e:38:61:fb", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap297dd7c7-e4", "ovs_interfaceid": "297dd7c7-e452-4cca-a536-0b1f09789489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.171 239853 DEBUG nova.network.os_vif_util [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.171 239853 DEBUG os_vif [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.172 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.173 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297dd7c7-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.175 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.178 239853 INFO os_vif [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:61:fb,bridge_name='br-int',has_traffic_filtering=True,id=297dd7c7-e452-4cca-a536-0b1f09789489,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap297dd7c7-e4')#033[00m
Feb  2 12:45:25 np0005605476 podman[247117]: 2026-02-02 17:45:25.21249523 +0000 UTC m=+0.063261182 container remove 6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.218 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[22310065-552f-415c-aab2-734ecf72d879]: (4, ('Mon Feb  2 05:45:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d (6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78)\n6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78\nMon Feb  2 05:45:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d (6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78)\n6667864f9147dfc6f5ef305af3de2a83a86b97a1d69bc4abd275fd174941ea78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.220 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7427ab16-03b0-4163-a738-246b6fe8d533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.221 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0e2bcc8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.223 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 kernel: tapb0e2bcc8-d0: left promiscuous mode
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.225 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.227 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b14441dd-8be2-4b5d-8ca8-52170b719a22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.235 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.244 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[185e193b-4951-4887-8557-4b315ea5eb65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.245 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f12a76b9-5482-46b5-89fb-eb2cfcba5f6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.264 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5f5f7e-1224-4043-9b46-951c56120ee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356394, 'reachable_time': 28947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247149, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 systemd[1]: run-netns-ovnmeta\x2db0e2bcc8\x2ddbb4\x2d4b4e\x2dab42\x2d3a2e23a9d08d.mount: Deactivated successfully.
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.276 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.276 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[6da4bbec-e923-47cf-9037-1cc826d8ae33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.277 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 297dd7c7-e452-4cca-a536-0b1f09789489 in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d unbound from our chassis#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.279 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.279 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3839cb7e-5316-4e59-8f88-d07e50cbda66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.280 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 297dd7c7-e452-4cca-a536-0b1f09789489 in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d unbound from our chassis#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.281 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:45:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:25.281 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3596921f-ac78-4598-b398-78c83dda817b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 134 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.0 MiB/s wr, 357 op/s
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.467 239853 INFO nova.virt.libvirt.driver [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Deleting instance files /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879_del#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.468 239853 INFO nova.virt.libvirt.driver [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Deletion of /var/lib/nova/instances/1839664c-7601-4228-8383-be2631448879_del complete#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.543 239853 DEBUG nova.virt.libvirt.host [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.544 239853 INFO nova.virt.libvirt.host [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] UEFI support detected#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.546 239853 INFO nova.compute.manager [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.546 239853 DEBUG oslo.service.loopingcall [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.546 239853 DEBUG nova.compute.manager [-] [instance: 1839664c-7601-4228-8383-be2631448879] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:45:25 np0005605476 nova_compute[239846]: 2026-02-02 17:45:25.546 239853 DEBUG nova.network.neutron [-] [instance: 1839664c-7601-4228-8383-be2631448879] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.306 239853 DEBUG nova.network.neutron [-] [instance: 1839664c-7601-4228-8383-be2631448879] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.336 239853 INFO nova.compute.manager [-] [instance: 1839664c-7601-4228-8383-be2631448879] Took 0.79 seconds to deallocate network for instance.#033[00m
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171052302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171052302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.390 239853 DEBUG nova.compute.manager [req-76e8f9f9-968d-40d7-a25f-1dfc7eaad787 req-6a16ed2a-72cf-4fb4-8cca-cafd536f76b2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-deleted-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.404 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.405 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.467 239853 DEBUG oslo_concurrency.processutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.955 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.955 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.956 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.956 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.956 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.956 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.957 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.958 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.958 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.959 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.959 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.960 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.960 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.960 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.961 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.961 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.961 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.961 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.962 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.962 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444847128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.962 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.962 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.962 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.963 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.963 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.963 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.963 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.964 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.964 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.964 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-unplugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.964 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.965 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "1839664c-7601-4228-8383-be2631448879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.965 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.965 239853 DEBUG oslo_concurrency.lockutils [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "1839664c-7601-4228-8383-be2631448879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.965 239853 DEBUG nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] No waiting events found dispatching network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.966 239853 WARNING nova.compute.manager [req-105564cf-f4e2-4362-8c05-40a144e8c24b req-fb731e71-7e60-46fb-b12e-20a47319564b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 1839664c-7601-4228-8383-be2631448879] Received unexpected event network-vif-plugged-297dd7c7-e452-4cca-a536-0b1f09789489 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.981 239853 DEBUG oslo_concurrency.processutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:26 np0005605476 nova_compute[239846]: 2026-02-02 17:45:26.989 239853 DEBUG nova.compute.provider_tree [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.011 239853 DEBUG nova.scheduler.client.report [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.035 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.071 239853 INFO nova.scheduler.client.report [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Deleted allocations for instance 1839664c-7601-4228-8383-be2631448879#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.159 239853 DEBUG oslo_concurrency.lockutils [None req-346540d3-4be9-47cc-8ef6-173cf857387c 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1839664c-7601-4228-8383-be2631448879" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.257 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.257 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.258 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.258 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.258 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.259 239853 INFO nova.compute.manager [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Terminating instance#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.260 239853 DEBUG nova.compute.manager [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:45:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 118 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 28 KiB/s wr, 315 op/s
Feb  2 12:45:27 np0005605476 kernel: tap5a6fc3d8-4a (unregistering): left promiscuous mode
Feb  2 12:45:27 np0005605476 NetworkManager[49022]: <info>  [1770054327.7074] device (tap5a6fc3d8-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.741 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:27Z|00049|binding|INFO|Releasing lport 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 from this chassis (sb_readonly=0)
Feb  2 12:45:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:27Z|00050|binding|INFO|Setting lport 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 down in Southbound
Feb  2 12:45:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:27Z|00051|binding|INFO|Removing iface tap5a6fc3d8-4a ovn-installed in OVS
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.743 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.748 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.749 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ee:2a 10.100.0.4'], port_security=['fa:16:3e:7b:ee:2a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4af1978a-81d5-4487-b5a2-07917afc796f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28896be470ca44d887bb24e9da819ee1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01e82fcf-c326-4345-a87f-a3e7a709dc13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b072f30-6c16-4a55-8964-a24e67279145, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.750 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 in datapath 691da22e-0a6a-44ed-b98e-b631dbd59fb2 unbound from our chassis#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.751 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691da22e-0a6a-44ed-b98e-b631dbd59fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.752 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdd2db2-f1ab-4bc5-908c-0937fe93ff36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.752 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2 namespace which is not needed anymore#033[00m
Feb  2 12:45:27 np0005605476 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb  2 12:45:27 np0005605476 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 6.519s CPU time.
Feb  2 12:45:27 np0005605476 systemd-machined[208080]: Machine qemu-1-instance-00000001 terminated.
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [NOTICE]   (246931) : haproxy version is 2.8.14-c23fe91
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [NOTICE]   (246931) : path to executable is /usr/sbin/haproxy
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [WARNING]  (246931) : Exiting Master process...
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [WARNING]  (246931) : Exiting Master process...
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [ALERT]    (246931) : Current worker (246947) exited with code 143 (Terminated)
Feb  2 12:45:27 np0005605476 neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2[246927]: [WARNING]  (246931) : All workers exited. Exiting... (0)
Feb  2 12:45:27 np0005605476 systemd[1]: libpod-b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee.scope: Deactivated successfully.
Feb  2 12:45:27 np0005605476 podman[247193]: 2026-02-02 17:45:27.856603417 +0000 UTC m=+0.034353947 container died b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.877 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee-userdata-shm.mount: Deactivated successfully.
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.884 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-885597d3bae1c6bf720135c9d69b54d853cd75f71434da979c6f665f67de0e21-merged.mount: Deactivated successfully.
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.895 239853 INFO nova.virt.libvirt.driver [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Instance destroyed successfully.#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.896 239853 DEBUG nova.objects.instance [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lazy-loading 'resources' on Instance uuid 4af1978a-81d5-4487-b5a2-07917afc796f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:27 np0005605476 podman[247193]: 2026-02-02 17:45:27.901171238 +0000 UTC m=+0.078921738 container cleanup b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.911 239853 DEBUG nova.virt.libvirt.vif [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1377744495',display_name='tempest-VolumesActionsTest-instance-1377744495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1377744495',id=1,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:45:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28896be470ca44d887bb24e9da819ee1',ramdisk_id='',reservation_id='r-5empzgju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1730916975',owner_user_name='tempest-VolumesActionsTest-1730916975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:45:21Z,user_data=None,user_id='f2b1366a8ee34a0e9437bb253f37a284',uuid=4af1978a-81d5-4487-b5a2-07917afc796f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.912 239853 DEBUG nova.network.os_vif_util [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converting VIF {"id": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "address": "fa:16:3e:7b:ee:2a", "network": {"id": "691da22e-0a6a-44ed-b98e-b631dbd59fb2", "bridge": "br-int", "label": "tempest-VolumesActionsTest-923176375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28896be470ca44d887bb24e9da819ee1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6fc3d8-4a", "ovs_interfaceid": "5a6fc3d8-4a79-4675-8e70-3199ef6a61e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.913 239853 DEBUG nova.network.os_vif_util [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.913 239853 DEBUG os_vif [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:45:27 np0005605476 systemd[1]: libpod-conmon-b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee.scope: Deactivated successfully.
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.916 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.916 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a6fc3d8-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.918 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.920 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.923 239853 INFO os_vif [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ee:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6fc3d8-4a79-4675-8e70-3199ef6a61e3,network=Network(691da22e-0a6a-44ed-b98e-b631dbd59fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6fc3d8-4a')#033[00m
Feb  2 12:45:27 np0005605476 podman[247229]: 2026-02-02 17:45:27.971477435 +0000 UTC m=+0.049442818 container remove b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.977 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e6517748-aced-4b72-9cc1-2e628d7b00c2]: (4, ('Mon Feb  2 05:45:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2 (b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee)\nb24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee\nMon Feb  2 05:45:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2 (b24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee)\nb24dd17c18ed96d67eda93cd74e0aab675422fdd4744f7a9a3215a6aa83ab3ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.978 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc8029c-62cb-4d58-b8ef-74855729c5a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.980 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691da22e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:27 np0005605476 kernel: tap691da22e-00: left promiscuous mode
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.982 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.984 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:27.987 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9a2036-154b-4e95-bbb1-9beea21f1fb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:27 np0005605476 nova_compute[239846]: 2026-02-02 17:45:27.993 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:28.001 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb7652f-d46c-4522-8877-f8def124bc79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:28.002 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c82ece-1081-4168-a8b3-b2db654efc83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:28.018 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[17bc98d8-2a7d-4a78-ae78-f5bcdb511720]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 356311, 'reachable_time': 36832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247261, 'error': None, 'target': 'ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:28 np0005605476 systemd[1]: run-netns-ovnmeta\x2d691da22e\x2d0a6a\x2d44ed\x2db98e\x2db631dbd59fb2.mount: Deactivated successfully.
Feb  2 12:45:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:28.021 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-691da22e-0a6a-44ed-b98e-b631dbd59fb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:45:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:28.021 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[097e98cd-30e7-45cb-a0c4-89630fa73fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:28 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.182 239853 INFO nova.virt.libvirt.driver [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Deleting instance files /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f_del#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.183 239853 INFO nova.virt.libvirt.driver [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Deletion of /var/lib/nova/instances/4af1978a-81d5-4487-b5a2-07917afc796f_del complete#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.248 239853 INFO nova.compute.manager [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.248 239853 DEBUG oslo.service.loopingcall [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.249 239853 DEBUG nova.compute.manager [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.249 239853 DEBUG nova.network.neutron [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.524 239853 DEBUG nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-unplugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.525 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.525 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.526 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.526 239853 DEBUG nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] No waiting events found dispatching network-vif-unplugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.526 239853 DEBUG nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-unplugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.527 239853 DEBUG nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.527 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.528 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.528 239853 DEBUG oslo_concurrency.lockutils [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.528 239853 DEBUG nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] No waiting events found dispatching network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:28 np0005605476 nova_compute[239846]: 2026-02-02 17:45:28.529 239853 WARNING nova.compute.manager [req-3af03e53-a3ed-4b08-ba37-96afdb4c5ac3 req-f52503df-c251-4bb6-a646-673af5b31d8d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received unexpected event network-vif-plugged-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:45:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb  2 12:45:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb  2 12:45:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.262 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.263 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:45:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 85 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 31 KiB/s wr, 303 op/s
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.360 239853 DEBUG nova.network.neutron [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.378 239853 INFO nova.compute.manager [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Took 1.13 seconds to deallocate network for instance.#033[00m
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550896169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550896169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.446 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.447 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.453 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:29 np0005605476 nova_compute[239846]: 2026-02-02 17:45:29.492 239853 DEBUG oslo_concurrency.processutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb  2 12:45:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb  2 12:45:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989915814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.075 239853 DEBUG oslo_concurrency.processutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.080 239853 DEBUG nova.compute.provider_tree [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.107 239853 DEBUG nova.scheduler.client.report [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.140 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.175 239853 INFO nova.scheduler.client.report [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Deleted allocations for instance 4af1978a-81d5-4487-b5a2-07917afc796f#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.279 239853 DEBUG oslo_concurrency.lockutils [None req-67acb3d3-65f5-442c-8ca5-9ce999a973d6 f2b1366a8ee34a0e9437bb253f37a284 28896be470ca44d887bb24e9da819ee1 - - default default] Lock "4af1978a-81d5-4487-b5a2-07917afc796f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:30 np0005605476 nova_compute[239846]: 2026-02-02 17:45:30.873 239853 DEBUG nova.compute.manager [req-0acd77c6-2042-49b0-8acd-05a491c2a753 req-c88fa5db-1329-4ddf-abd7-cfee7d1b5df0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Received event network-vif-deleted-5a6fc3d8-4a79-4675-8e70-3199ef6a61e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.260 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.260 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.260 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.261 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.261 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 295 op/s
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.346 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.347 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.368 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.431 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.432 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.439 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.439 239853 INFO nova.compute.claims [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.720 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535509601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.829 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.980 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.982 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4822MB free_disk=59.97084723133594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:45:31 np0005605476 nova_compute[239846]: 2026-02-02 17:45:31.982 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946867783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.270 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.274 239853 DEBUG nova.compute.provider_tree [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.296 239853 DEBUG nova.scheduler.client.report [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.319 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.320 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.322 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.399 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.399 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.412 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 86fbc85b-f1cc-49be-89af-67cf46390288 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.412 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.412 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.418 239853 INFO nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.437 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.461 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.530 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.532 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.532 239853 INFO nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Creating image(s)#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.552 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.575 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.597 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.600 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.648 239853 DEBUG nova.policy [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '067cb133f5004edda930844c63f37aad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54713476150d4f62beed2a2d89131f2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.661 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.662 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.663 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.663 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.685 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.689 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 86fbc85b-f1cc-49be-89af-67cf46390288_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.885 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 86fbc85b-f1cc-49be-89af-67cf46390288_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.938 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.944 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] resizing rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:45:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159620426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.975 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:32 np0005605476 nova_compute[239846]: 2026-02-02 17:45:32.980 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.025 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.033 239853 DEBUG nova.objects.instance [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'migration_context' on Instance uuid 86fbc85b-f1cc-49be-89af-67cf46390288 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.052 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.052 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Ensure instance console log exists: /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.053 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.053 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.054 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.059 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.059 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540887215' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540887215' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 7.5 KiB/s wr, 191 op/s
Feb  2 12:45:33 np0005605476 nova_compute[239846]: 2026-02-02 17:45:33.354 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Successfully created port: 48ba5e3a-1352-404e-85a4-ef691f57e90b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.296 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Successfully updated port: 48ba5e3a-1352-404e-85a4-ef691f57e90b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.330 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.330 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquired lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.331 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.454 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.485 239853 DEBUG nova.compute.manager [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-changed-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.485 239853 DEBUG nova.compute.manager [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Refreshing instance network info cache due to event network-changed-48ba5e3a-1352-404e-85a4-ef691f57e90b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.486 239853 DEBUG oslo_concurrency.lockutils [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:45:34 np0005605476 nova_compute[239846]: 2026-02-02 17:45:34.555 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:45:34 np0005605476 podman[247522]: 2026-02-02 17:45:34.608359588 +0000 UTC m=+0.051225947 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:45:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb  2 12:45:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb  2 12:45:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.058 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.059 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.059 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:45:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 69 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 125 op/s
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.772 239853 DEBUG nova.network.neutron [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Updating instance_info_cache with network_info: [{"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.791 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Releasing lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.792 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Instance network_info: |[{"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.793 239853 DEBUG oslo_concurrency.lockutils [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.794 239853 DEBUG nova.network.neutron [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Refreshing network info cache for port 48ba5e3a-1352-404e-85a4-ef691f57e90b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.800 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Start _get_guest_xml network_info=[{"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.808 239853 WARNING nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.821 239853 DEBUG nova.virt.libvirt.host [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.822 239853 DEBUG nova.virt.libvirt.host [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.828 239853 DEBUG nova.virt.libvirt.host [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.829 239853 DEBUG nova.virt.libvirt.host [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.830 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.830 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.831 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.832 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.832 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.833 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.833 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.833 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.834 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.834 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.835 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.835 239853 DEBUG nova.virt.hardware [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:45:35 np0005605476 nova_compute[239846]: 2026-02-02 17:45:35.840 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761454358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:36 np0005605476 nova_compute[239846]: 2026-02-02 17:45:36.434 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:36 np0005605476 nova_compute[239846]: 2026-02-02 17:45:36.462 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:45:36 np0005605476 nova_compute[239846]: 2026-02-02 17:45:36.466 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:45:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:45:36
Feb  2 12:45:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:45:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:45:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Feb  2 12:45:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670937082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670937082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:45:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000125168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.051 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.053 239853 DEBUG nova.virt.libvirt.vif [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1009074190',display_name='tempest-VolumesActionsTest-instance-1009074190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1009074190',id=3,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-qhxke0og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:32Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=86fbc85b-f1cc-49be-89af-67cf46390288,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.054 239853 DEBUG nova.network.os_vif_util [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.055 239853 DEBUG nova.network.os_vif_util [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.056 239853 DEBUG nova.objects.instance [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 86fbc85b-f1cc-49be-89af-67cf46390288 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.079 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <uuid>86fbc85b-f1cc-49be-89af-67cf46390288</uuid>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <name>instance-00000003</name>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesActionsTest-instance-1009074190</nova:name>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:45:35</nova:creationTime>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:user uuid="067cb133f5004edda930844c63f37aad">tempest-VolumesActionsTest-1170802853-project-member</nova:user>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:project uuid="54713476150d4f62beed2a2d89131f2b">tempest-VolumesActionsTest-1170802853</nova:project>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <nova:port uuid="48ba5e3a-1352-404e-85a4-ef691f57e90b">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="serial">86fbc85b-f1cc-49be-89af-67cf46390288</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="uuid">86fbc85b-f1cc-49be-89af-67cf46390288</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/86fbc85b-f1cc-49be-89af-67cf46390288_disk">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/86fbc85b-f1cc-49be-89af-67cf46390288_disk.config">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:63:61:dd"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <target dev="tap48ba5e3a-13"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/console.log" append="off"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:45:37 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:45:37 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:45:37 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:45:37 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.080 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Preparing to wait for external event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.081 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.081 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.081 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.082 239853 DEBUG nova.virt.libvirt.vif [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:45:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1009074190',display_name='tempest-VolumesActionsTest-instance-1009074190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1009074190',id=3,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-qhxke0og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:45:32Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=86fbc85b-f1cc-49be-89af-67cf46390288,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.083 239853 DEBUG nova.network.os_vif_util [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.084 239853 DEBUG nova.network.os_vif_util [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.084 239853 DEBUG os_vif [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.085 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.085 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.086 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.089 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.089 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48ba5e3a-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.090 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap48ba5e3a-13, col_values=(('external_ids', {'iface-id': '48ba5e3a-1352-404e-85a4-ef691f57e90b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:61:dd', 'vm-uuid': '86fbc85b-f1cc-49be-89af-67cf46390288'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.091 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:37 np0005605476 NetworkManager[49022]: <info>  [1770054337.0924] manager: (tap48ba5e3a-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.094 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.097 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.098 239853 INFO os_vif [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13')
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.151 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.151 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.152 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] No VIF found with MAC fa:16:3e:63:61:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.152 239853 INFO nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Using config drive
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.169 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 88 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.7 MiB/s wr, 153 op/s
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:45:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:45:37 np0005605476 podman[247623]: 2026-02-02 17:45:37.675438479 +0000 UTC m=+0.121656537 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.778 239853 INFO nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Creating config drive at /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.782 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl48qie7l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.817 239853 DEBUG nova.network.neutron [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Updated VIF entry in instance network info cache for port 48ba5e3a-1352-404e-85a4-ef691f57e90b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.818 239853 DEBUG nova.network.neutron [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Updating instance_info_cache with network_info: [{"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.833 239853 DEBUG oslo_concurrency.lockutils [req-9bc8c76b-8adb-457e-b7f4-79b22017d214 req-3eab8155-c392-4913-abd7-2039f6d4fd18 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-86fbc85b-f1cc-49be-89af-67cf46390288" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.899 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl48qie7l" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.928 239853 DEBUG nova.storage.rbd_utils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] rbd image 86fbc85b-f1cc-49be-89af-67cf46390288_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:45:37 np0005605476 nova_compute[239846]: 2026-02-02 17:45:37.932 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config 86fbc85b-f1cc-49be-89af-67cf46390288_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.077 239853 DEBUG oslo_concurrency.processutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config 86fbc85b-f1cc-49be-89af-67cf46390288_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.078 239853 INFO nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Deleting local config drive /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288/disk.config because it was imported into RBD.
Feb  2 12:45:38 np0005605476 kernel: tap48ba5e3a-13: entered promiscuous mode
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.1230] manager: (tap48ba5e3a-13): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Feb  2 12:45:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:38Z|00052|binding|INFO|Claiming lport 48ba5e3a-1352-404e-85a4-ef691f57e90b for this chassis.
Feb  2 12:45:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:38Z|00053|binding|INFO|48ba5e3a-1352-404e-85a4-ef691f57e90b: Claiming fa:16:3e:63:61:dd 10.100.0.11
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.125 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.132 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:61:dd 10.100.0.11'], port_security=['fa:16:3e:63:61:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '86fbc85b-f1cc-49be-89af-67cf46390288', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=48ba5e3a-1352-404e-85a4-ef691f57e90b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.133 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 48ba5e3a-1352-404e-85a4-ef691f57e90b in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d bound to our chassis
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.134 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d
Feb  2 12:45:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:38Z|00054|binding|INFO|Setting lport 48ba5e3a-1352-404e-85a4-ef691f57e90b ovn-installed in OVS
Feb  2 12:45:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:38Z|00055|binding|INFO|Setting lport 48ba5e3a-1352-404e-85a4-ef691f57e90b up in Southbound
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.141 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.143 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6912a961-8858-4877-9bab-6504b87e2175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.144 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0e2bcc8-d1 in ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.146 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0e2bcc8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.146 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[72c4ad7f-1518-4d9b-946f-87648c1e9153]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.147 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c21c5a65-c4a0-4fdb-8f27-01a597949100]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 systemd-udevd[247705]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.157 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e4ac7b-9195-4d9e-9d33-a36f3e5a8d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 systemd-machined[208080]: New machine qemu-3-instance-00000003.
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.1709] device (tap48ba5e3a-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.1719] device (tap48ba5e3a-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.172 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[264b1d3c-ad56-44ec-9b5f-e69ae6518ca0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.195 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3e94af-f0b2-432e-8d3e-dfaab65cece2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.2028] manager: (tapb0e2bcc8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.202 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[10ffa0e9-68d3-4a39-bd37-56608394dfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.231 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[db1080b7-1fbc-4572-9d9f-ebbb5654cd41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.235 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[96ed0162-ecfc-476b-8319-9950b39ee553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.2502] device (tapb0e2bcc8-d0): carrier: link connected
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.253 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d584c001-9783-429e-86d6-d47d6e48522e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.265 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[12131c8e-c609-4000-9fd6-cc22d18823b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0e2bcc8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:3d:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 357958, 'reachable_time': 25569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247736, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.275 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[54da040d-4aa5-43d4-8f35-db308649d059]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:3de2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 357958, 'tstamp': 357958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247737, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.283 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c3912d-9458-4543-ac42-ed04102627bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0e2bcc8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:3d:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 357958, 'reachable_time': 25569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247738, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.298 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa0b392-d23d-4ad2-845f-15973aac0cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.323 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ab7e44-106d-4ac7-b9d4-632682d41ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.325 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0e2bcc8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.325 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.326 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0e2bcc8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.327 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:38 np0005605476 NetworkManager[49022]: <info>  [1770054338.3284] manager: (tapb0e2bcc8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb  2 12:45:38 np0005605476 kernel: tapb0e2bcc8-d0: entered promiscuous mode
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.330 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.331 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0e2bcc8-d0, col_values=(('external_ids', {'iface-id': '8321b436-a113-451e-be67-58eea0929a06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.332 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:38Z|00056|binding|INFO|Releasing lport 8321b436-a113-451e-be67-58eea0929a06 from this chassis (sb_readonly=0)
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.333 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.335 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.337 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.337 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab2874-6f8d-407a-8713-a20116d0972e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.338 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.pid.haproxy
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:45:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:38.340 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'env', 'PROCESS_TAG=haproxy-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.436 239853 DEBUG nova.compute.manager [req-b819485c-7950-4658-b857-e01033a5fedf req-f9923fe9-e77e-464d-8084-95903045bbb3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.437 239853 DEBUG oslo_concurrency.lockutils [req-b819485c-7950-4658-b857-e01033a5fedf req-f9923fe9-e77e-464d-8084-95903045bbb3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.437 239853 DEBUG oslo_concurrency.lockutils [req-b819485c-7950-4658-b857-e01033a5fedf req-f9923fe9-e77e-464d-8084-95903045bbb3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.437 239853 DEBUG oslo_concurrency.lockutils [req-b819485c-7950-4658-b857-e01033a5fedf req-f9923fe9-e77e-464d-8084-95903045bbb3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.438 239853 DEBUG nova.compute.manager [req-b819485c-7950-4658-b857-e01033a5fedf req-f9923fe9-e77e-464d-8084-95903045bbb3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Processing event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.543 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054338.5428092, 86fbc85b-f1cc-49be-89af-67cf46390288 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.543 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] VM Started (Lifecycle Event)#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.545 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.548 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.550 239853 INFO nova.virt.libvirt.driver [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Instance spawned successfully.#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.551 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.562 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.566 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.569 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.570 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.570 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.571 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.571 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.572 239853 DEBUG nova.virt.libvirt.driver [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.590 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.591 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054338.5428991, 86fbc85b-f1cc-49be-89af-67cf46390288 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.591 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.617 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.620 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054338.5475335, 86fbc85b-f1cc-49be-89af-67cf46390288 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.620 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.628 239853 INFO nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Took 6.10 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.629 239853 DEBUG nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:38 np0005605476 podman[247811]: 2026-02-02 17:45:38.630921814 +0000 UTC m=+0.040751985 container create b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.640 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.643 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:45:38 np0005605476 systemd[1]: Started libpod-conmon-b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931.scope.
Feb  2 12:45:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:45:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9c3d4ab3735b860545ca05886e081cb5685766a9225323f5a765e6c52bce97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:45:38 np0005605476 podman[247811]: 2026-02-02 17:45:38.689533875 +0000 UTC m=+0.099364096 container init b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:45:38 np0005605476 podman[247811]: 2026-02-02 17:45:38.69366534 +0000 UTC m=+0.103495521 container start b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:45:38 np0005605476 podman[247811]: 2026-02-02 17:45:38.608384536 +0000 UTC m=+0.018214727 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:45:38 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [NOTICE]   (247830) : New worker (247832) forked
Feb  2 12:45:38 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [NOTICE]   (247830) : Loading success.
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.772 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.794 239853 INFO nova.compute.manager [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Took 7.39 seconds to build instance.#033[00m
Feb  2 12:45:38 np0005605476 nova_compute[239846]: 2026-02-02 17:45:38.811 239853 DEBUG oslo_concurrency.lockutils [None req-3d167d53-8a3b-41b0-a762-e785c64b7d30 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 88 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 2.3 MiB/s wr, 122 op/s
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.456 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.891 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653102152' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.892 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.893 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.893 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653102152' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.893 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.895 239853 INFO nova.compute.manager [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Terminating instance#033[00m
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.896 239853 DEBUG nova.compute.manager [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:45:39 np0005605476 kernel: tap48ba5e3a-13 (unregistering): left promiscuous mode
Feb  2 12:45:39 np0005605476 NetworkManager[49022]: <info>  [1770054339.9291] device (tap48ba5e3a-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.929 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:39Z|00057|binding|INFO|Releasing lport 48ba5e3a-1352-404e-85a4-ef691f57e90b from this chassis (sb_readonly=0)
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.934 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:39Z|00058|binding|INFO|Setting lport 48ba5e3a-1352-404e-85a4-ef691f57e90b down in Southbound
Feb  2 12:45:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:45:39Z|00059|binding|INFO|Removing iface tap48ba5e3a-13 ovn-installed in OVS
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:39.943 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:61:dd 10.100.0.11'], port_security=['fa:16:3e:63:61:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '86fbc85b-f1cc-49be-89af-67cf46390288', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54713476150d4f62beed2a2d89131f2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f740194-ee22-4f7b-a04b-7f9012a4aa6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54f0ccd7-fdc2-44a8-95da-88fb6d6d99fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=48ba5e3a-1352-404e-85a4-ef691f57e90b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:45:39 np0005605476 nova_compute[239846]: 2026-02-02 17:45:39.944 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:39.947 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 48ba5e3a-1352-404e-85a4-ef691f57e90b in datapath b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d unbound from our chassis#033[00m
Feb  2 12:45:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:39.949 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:45:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:39.950 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae9bf82-6f7e-49ed-806a-0034062371df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:39.951 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d namespace which is not needed anymore#033[00m
Feb  2 12:45:39 np0005605476 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb  2 12:45:39 np0005605476 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1.802s CPU time.
Feb  2 12:45:39 np0005605476 systemd-machined[208080]: Machine qemu-3-instance-00000003 terminated.
Feb  2 12:45:40 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [NOTICE]   (247830) : haproxy version is 2.8.14-c23fe91
Feb  2 12:45:40 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [NOTICE]   (247830) : path to executable is /usr/sbin/haproxy
Feb  2 12:45:40 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [WARNING]  (247830) : Exiting Master process...
Feb  2 12:45:40 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [ALERT]    (247830) : Current worker (247832) exited with code 143 (Terminated)
Feb  2 12:45:40 np0005605476 neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d[247826]: [WARNING]  (247830) : All workers exited. Exiting... (0)
Feb  2 12:45:40 np0005605476 systemd[1]: libpod-b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931.scope: Deactivated successfully.
Feb  2 12:45:40 np0005605476 podman[247862]: 2026-02-02 17:45:40.074506586 +0000 UTC m=+0.044295214 container died b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:45:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931-userdata-shm.mount: Deactivated successfully.
Feb  2 12:45:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0b9c3d4ab3735b860545ca05886e081cb5685766a9225323f5a765e6c52bce97-merged.mount: Deactivated successfully.
Feb  2 12:45:40 np0005605476 podman[247862]: 2026-02-02 17:45:40.106150416 +0000 UTC m=+0.075939044 container cleanup b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb  2 12:45:40 np0005605476 NetworkManager[49022]: <info>  [1770054340.1120] manager: (tap48ba5e3a-13): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb  2 12:45:40 np0005605476 systemd[1]: libpod-conmon-b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931.scope: Deactivated successfully.
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.139 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054325.1391962, 1839664c-7601-4228-8383-be2631448879 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.140 239853 INFO nova.compute.manager [-] [instance: 1839664c-7601-4228-8383-be2631448879] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.146 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.151 239853 INFO nova.virt.libvirt.driver [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Instance destroyed successfully.#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.151 239853 DEBUG nova.objects.instance [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lazy-loading 'resources' on Instance uuid 86fbc85b-f1cc-49be-89af-67cf46390288 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.158 239853 DEBUG nova.compute.manager [None req-d9b2fe4a-395d-4a52-8c35-266f5ee42981 - - - - - -] [instance: 1839664c-7601-4228-8383-be2631448879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:45:40 np0005605476 podman[247898]: 2026-02-02 17:45:40.158984807 +0000 UTC m=+0.036642381 container remove b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.160 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ebbbf9-68f2-4e8a-82dd-e149ea97cbdd]: (4, ('Mon Feb  2 05:45:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d (b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931)\nb618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931\nMon Feb  2 05:45:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d (b618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931)\nb618e636ce0280b70dea70df8fada79b12f35802c4017339a9a388e9bec17931\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.161 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a087b56-6bbd-407a-997d-12c4f1efdd7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.161 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0e2bcc8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.162 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 kernel: tapb0e2bcc8-d0: left promiscuous mode
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.165 239853 DEBUG nova.virt.libvirt.vif [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:45:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1009074190',display_name='tempest-VolumesActionsTest-instance-1009074190',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1009074190',id=3,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:45:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54713476150d4f62beed2a2d89131f2b',ramdisk_id='',reservation_id='r-qhxke0og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1170802853',owner_user_name='tempest-VolumesActionsTest-1170802853-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:45:38Z,user_data=None,user_id='067cb133f5004edda930844c63f37aad',uuid=86fbc85b-f1cc-49be-89af-67cf46390288,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.166 239853 DEBUG nova.network.os_vif_util [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converting VIF {"id": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "address": "fa:16:3e:63:61:dd", "network": {"id": "b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d", "bridge": "br-int", "label": "tempest-VolumesActionsTest-7203067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54713476150d4f62beed2a2d89131f2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba5e3a-13", "ovs_interfaceid": "48ba5e3a-1352-404e-85a4-ef691f57e90b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.166 239853 DEBUG nova.network.os_vif_util [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.167 239853 DEBUG os_vif [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.168 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.168 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ba5e3a-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.169 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.171 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.173 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.173 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.175 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[197b0d29-0986-48bd-862a-dad32fb38543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.175 239853 INFO os_vif [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:61:dd,bridge_name='br-int',has_traffic_filtering=True,id=48ba5e3a-1352-404e-85a4-ef691f57e90b,network=Network(b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba5e3a-13')#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.186 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a860eeb-f4a2-44fc-b92c-36f519a96347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.187 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0cea34fb-58fa-49f3-a07e-1f96c7c0af91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.197 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9adf108d-fb1b-4b55-9186-c1bccebaac7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 357952, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247933, 'error': None, 'target': 'ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.199 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0e2bcc8-dbb4-4b4e-ab42-3a2e23a9d08d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:45:40 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:40.199 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[5c96596c-b178-4faf-976b-dabb4833ad3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:45:40 np0005605476 systemd[1]: run-netns-ovnmeta\x2db0e2bcc8\x2ddbb4\x2d4b4e\x2dab42\x2d3a2e23a9d08d.mount: Deactivated successfully.
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.343 239853 INFO nova.virt.libvirt.driver [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Deleting instance files /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288_del#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.344 239853 INFO nova.virt.libvirt.driver [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Deletion of /var/lib/nova/instances/86fbc85b-f1cc-49be-89af-67cf46390288_del complete#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.387 239853 INFO nova.compute.manager [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.388 239853 DEBUG oslo.service.loopingcall [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.389 239853 DEBUG nova.compute.manager [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.390 239853 DEBUG nova.network.neutron [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.530 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.531 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.531 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.532 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.532 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] No waiting events found dispatching network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.532 239853 WARNING nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received unexpected event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.533 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-unplugged-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.533 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.533 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.534 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.534 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] No waiting events found dispatching network-vif-unplugged-48ba5e3a-1352-404e-85a4-ef691f57e90b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.534 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-unplugged-48ba5e3a-1352-404e-85a4-ef691f57e90b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.535 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.535 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.535 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.535 239853 DEBUG oslo_concurrency.lockutils [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.536 239853 DEBUG nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] No waiting events found dispatching network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.536 239853 WARNING nova.compute.manager [req-3b931625-5511-434e-8cd1-d2ca4819b091 req-aae9b0b5-1381-4d63-8276-b11492731025 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received unexpected event network-vif-plugged-48ba5e3a-1352-404e-85a4-ef691f57e90b for instance with vm_state active and task_state deleting.
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.889 239853 DEBUG nova.network.neutron [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.919 239853 INFO nova.compute.manager [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Took 0.53 seconds to deallocate network for instance.
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.982 239853 DEBUG nova.compute.manager [req-7d80297b-38ec-431e-aab9-5b4e1cb6bf98 req-318d6827-772d-4451-ad73-3e391a3b7bbc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Received event network-vif-deleted-48ba5e3a-1352-404e-85a4-ef691f57e90b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.984 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:45:40 np0005605476 nova_compute[239846]: 2026-02-02 17:45:40.984 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.028 239853 DEBUG oslo_concurrency.processutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:45:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 69 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Feb  2 12:45:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:45:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326823353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.590 239853 DEBUG oslo_concurrency.processutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.595 239853 DEBUG nova.compute.provider_tree [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.611 239853 DEBUG nova.scheduler.client.report [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.635 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.665 239853 INFO nova.scheduler.client.report [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Deleted allocations for instance 86fbc85b-f1cc-49be-89af-67cf46390288
Feb  2 12:45:41 np0005605476 nova_compute[239846]: 2026-02-02 17:45:41.725 239853 DEBUG oslo_concurrency.lockutils [None req-111727d5-2e03-4b12-baaf-e00f9ae5f5fb 067cb133f5004edda930844c63f37aad 54713476150d4f62beed2a2d89131f2b - - default default] Lock "86fbc85b-f1cc-49be-89af-67cf46390288" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3440331133' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3440331133' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:42 np0005605476 nova_compute[239846]: 2026-02-02 17:45:42.888 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054327.8873086, 4af1978a-81d5-4487-b5a2-07917afc796f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:45:42 np0005605476 nova_compute[239846]: 2026-02-02 17:45:42.889 239853 INFO nova.compute.manager [-] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] VM Stopped (Lifecycle Event)
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532701575' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532701575' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:42 np0005605476 nova_compute[239846]: 2026-02-02 17:45:42.923 239853 DEBUG nova.compute.manager [None req-f3a2ed5d-f3a5-4373-8537-aa766134fa4f - - - - - -] [instance: 4af1978a-81d5-4487-b5a2-07917afc796f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:45:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3682572543' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3682572543' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 69 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 889 KiB/s wr, 166 op/s
Feb  2 12:45:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2200145660' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2200145660' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:44 np0005605476 nova_compute[239846]: 2026-02-02 17:45:44.458 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:45 np0005605476 nova_compute[239846]: 2026-02-02 17:45:45.170 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 755 KiB/s wr, 249 op/s
Feb  2 12:45:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743025923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743025923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:46.635 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:45:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:46.636 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:45:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:45:46.637 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:45:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:45:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/551565625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:45:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:45:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/551565625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 226 op/s
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.865719506522754e-07 of space, bias 1.0, pg target 0.0002959715851956826 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.1848604918038573e-06 of space, bias 1.0, pg target 0.0009554581475411572 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.824011170057578e-07 of space, bias 1.0, pg target 8.472033510172733e-05 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658724604573902 of space, bias 1.0, pg target 0.19976173813721707 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1268565297212159e-06 of space, bias 4.0, pg target 0.001352227835665459 quantized to 16 (current 16)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:45:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:45:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 145 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 10 MiB/s wr, 256 op/s
Feb  2 12:45:49 np0005605476 nova_compute[239846]: 2026-02-02 17:45:49.460 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:50 np0005605476 nova_compute[239846]: 2026-02-02 17:45:50.203 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:51 np0005605476 nova_compute[239846]: 2026-02-02 17:45:51.105 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 241 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 17 MiB/s wr, 223 op/s
Feb  2 12:45:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 241 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 871 KiB/s rd, 17 MiB/s wr, 169 op/s
Feb  2 12:45:54 np0005605476 nova_compute[239846]: 2026-02-02 17:45:54.462 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:55 np0005605476 nova_compute[239846]: 2026-02-02 17:45:55.151 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054340.1496944, 86fbc85b-f1cc-49be-89af-67cf46390288 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:45:55 np0005605476 nova_compute[239846]: 2026-02-02 17:45:55.151 239853 INFO nova.compute.manager [-] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] VM Stopped (Lifecycle Event)
Feb  2 12:45:55 np0005605476 nova_compute[239846]: 2026-02-02 17:45:55.185 239853 DEBUG nova.compute.manager [None req-fdcf878f-6d0c-436e-a5ca-d259527b4aac - - - - - -] [instance: 86fbc85b-f1cc-49be-89af-67cf46390288] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:45:55 np0005605476 nova_compute[239846]: 2026-02-02 17:45:55.206 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 537 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 891 KiB/s rd, 41 MiB/s wr, 202 op/s
Feb  2 12:45:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 577 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 45 MiB/s wr, 112 op/s
Feb  2 12:45:58 np0005605476 ceph-osd[85696]: bluestore.MempoolThread fragmentation_score=0.000150 took=0.000041s
Feb  2 12:45:58 np0005605476 ceph-osd[86737]: bluestore.MempoolThread fragmentation_score=0.000228 took=0.000036s
Feb  2 12:45:58 np0005605476 ceph-osd[87792]: bluestore.MempoolThread fragmentation_score=0.000286 took=0.000046s
Feb  2 12:45:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 673 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 53 MiB/s wr, 96 op/s
Feb  2 12:45:59 np0005605476 nova_compute[239846]: 2026-02-02 17:45:59.464 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:45:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:46:00 np0005605476 nova_compute[239846]: 2026-02-02 17:46:00.241 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.349217049 +0000 UTC m=+0.036595018 container create 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:00 np0005605476 systemd[1]: Started libpod-conmon-911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953.scope.
Feb  2 12:46:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.332508489 +0000 UTC m=+0.019886478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.431385369 +0000 UTC m=+0.118763428 container init 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.441613541 +0000 UTC m=+0.128991510 container start 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:00 np0005605476 keen_liskov[248120]: 167 167
Feb  2 12:46:00 np0005605476 systemd[1]: libpod-911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953.scope: Deactivated successfully.
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.463302637 +0000 UTC m=+0.150680646 container attach 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.463786951 +0000 UTC m=+0.151164950 container died 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:46:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:46:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:46:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:46:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-02e4fd4f30408939814f2a6bfa13aedbb1e43df6bf65bb95b82cd15b410b09e0-merged.mount: Deactivated successfully.
Feb  2 12:46:00 np0005605476 podman[248104]: 2026-02-02 17:46:00.507345099 +0000 UTC m=+0.194723068 container remove 911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:46:00 np0005605476 systemd[1]: libpod-conmon-911940c29c9d31e01cfe517f14fab58ec15f5961a8180db802beb091f1e23953.scope: Deactivated successfully.
Feb  2 12:46:00 np0005605476 podman[248146]: 2026-02-02 17:46:00.692821442 +0000 UTC m=+0.076797864 container create 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:46:00 np0005605476 systemd[1]: Started libpod-conmon-7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c.scope.
Feb  2 12:46:00 np0005605476 podman[248146]: 2026-02-02 17:46:00.645172311 +0000 UTC m=+0.029148763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:00 np0005605476 podman[248146]: 2026-02-02 17:46:00.759133656 +0000 UTC m=+0.143110098 container init 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:00 np0005605476 podman[248146]: 2026-02-02 17:46:00.765400079 +0000 UTC m=+0.149376501 container start 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:00 np0005605476 podman[248146]: 2026-02-02 17:46:00.769287056 +0000 UTC m=+0.153263498 container attach 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:46:01 np0005605476 practical_grothendieck[248162]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:46:01 np0005605476 practical_grothendieck[248162]: --> All data devices are unavailable
Feb  2 12:46:01 np0005605476 systemd[1]: libpod-7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c.scope: Deactivated successfully.
Feb  2 12:46:01 np0005605476 podman[248146]: 2026-02-02 17:46:01.196464248 +0000 UTC m=+0.580440710 container died 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:46:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-50f9bc1c189b9da6c96bd5613f6b3ad5bb7911e45cfe89d2c17fd5642ac0c66c-merged.mount: Deactivated successfully.
Feb  2 12:46:01 np0005605476 podman[248146]: 2026-02-02 17:46:01.24671101 +0000 UTC m=+0.630687432 container remove 7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:46:01 np0005605476 systemd[1]: libpod-conmon-7932a66f75e5e9149446a1c2fd4c1353dd01dbd04c80869b9d08d39c64d88f8c.scope: Deactivated successfully.
Feb  2 12:46:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 897 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 63 MiB/s wr, 91 op/s
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.662635463 +0000 UTC m=+0.042309235 container create 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:46:01 np0005605476 systemd[1]: Started libpod-conmon-821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf.scope.
Feb  2 12:46:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.643527938 +0000 UTC m=+0.023201720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.752203587 +0000 UTC m=+0.131877319 container init 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.762871561 +0000 UTC m=+0.142545293 container start 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.767037376 +0000 UTC m=+0.146711108 container attach 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:46:01 np0005605476 zealous_lichterman[248270]: 167 167
Feb  2 12:46:01 np0005605476 systemd[1]: libpod-821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf.scope: Deactivated successfully.
Feb  2 12:46:01 np0005605476 conmon[248270]: conmon 821347a81f4c4685fdb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf.scope/container/memory.events
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.771822677 +0000 UTC m=+0.151496409 container died 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:46:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5b36f7cc5238289dead2a9a8d4ddff67e7c482ca5655a674ab317a4f5a5008b8-merged.mount: Deactivated successfully.
Feb  2 12:46:01 np0005605476 podman[248254]: 2026-02-02 17:46:01.804483376 +0000 UTC m=+0.184157108 container remove 821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lichterman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:46:01 np0005605476 systemd[1]: libpod-conmon-821347a81f4c4685fdb0e83db8f590b5b371b13fce0a776de530a3d829599fdf.scope: Deactivated successfully.
Feb  2 12:46:01 np0005605476 podman[248294]: 2026-02-02 17:46:01.934812751 +0000 UTC m=+0.040581487 container create 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:46:01 np0005605476 systemd[1]: Started libpod-conmon-11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193.scope.
Feb  2 12:46:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc2153f85481f802ed6aea5eb0447e52ada5774a4a74413f817772eae41c95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc2153f85481f802ed6aea5eb0447e52ada5774a4a74413f817772eae41c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc2153f85481f802ed6aea5eb0447e52ada5774a4a74413f817772eae41c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc2153f85481f802ed6aea5eb0447e52ada5774a4a74413f817772eae41c95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:02.013939808 +0000 UTC m=+0.119708524 container init 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:01.91984876 +0000 UTC m=+0.025617456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:02.022856534 +0000 UTC m=+0.128625260 container start 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:02.026976787 +0000 UTC m=+0.132745513 container attach 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]: {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    "0": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "devices": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "/dev/loop3"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            ],
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_name": "ceph_lv0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_size": "21470642176",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "name": "ceph_lv0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "tags": {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_name": "ceph",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.crush_device_class": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.encrypted": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.objectstore": "bluestore",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_id": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.vdo": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.with_tpm": "0"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            },
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "vg_name": "ceph_vg0"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        }
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    ],
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    "1": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "devices": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "/dev/loop4"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            ],
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_name": "ceph_lv1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_size": "21470642176",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "name": "ceph_lv1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "tags": {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_name": "ceph",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.crush_device_class": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.encrypted": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.objectstore": "bluestore",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_id": "1",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.vdo": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.with_tpm": "0"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            },
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "vg_name": "ceph_vg1"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        }
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    ],
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    "2": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "devices": [
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "/dev/loop5"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            ],
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_name": "ceph_lv2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_size": "21470642176",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "name": "ceph_lv2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "tags": {
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.cluster_name": "ceph",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.crush_device_class": "",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.encrypted": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.objectstore": "bluestore",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osd_id": "2",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.vdo": "0",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:                "ceph.with_tpm": "0"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            },
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "type": "block",
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:            "vg_name": "ceph_vg2"
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:        }
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]:    ]
Feb  2 12:46:02 np0005605476 cranky_bassi[248311]: }
Feb  2 12:46:02 np0005605476 systemd[1]: libpod-11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193.scope: Deactivated successfully.
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:02.307222647 +0000 UTC m=+0.412991343 container died 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:46:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bedc2153f85481f802ed6aea5eb0447e52ada5774a4a74413f817772eae41c95-merged.mount: Deactivated successfully.
Feb  2 12:46:02 np0005605476 podman[248294]: 2026-02-02 17:46:02.348948825 +0000 UTC m=+0.454717531 container remove 11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bassi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:46:02 np0005605476 systemd[1]: libpod-conmon-11a103717f98faeec856c707607a6f8de163cc574cd7466bb946b7ae60b36193.scope: Deactivated successfully.
Feb  2 12:46:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb  2 12:46:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb  2 12:46:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.76354149 +0000 UTC m=+0.040672359 container create 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:46:02 np0005605476 systemd[1]: Started libpod-conmon-13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f.scope.
Feb  2 12:46:02 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.834926904 +0000 UTC m=+0.112057853 container init 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.839333945 +0000 UTC m=+0.116464804 container start 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:46:02 np0005605476 zealous_euler[248411]: 167 167
Feb  2 12:46:02 np0005605476 systemd[1]: libpod-13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f.scope: Deactivated successfully.
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.747789007 +0000 UTC m=+0.024919886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.842560204 +0000 UTC m=+0.119691163 container attach 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.843536441 +0000 UTC m=+0.120667340 container died 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:46:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-902d5c455990a7e03b564d8035aa5c18ca2926f973521732a2e48e178be3ed83-merged.mount: Deactivated successfully.
Feb  2 12:46:02 np0005605476 podman[248395]: 2026-02-02 17:46:02.876801906 +0000 UTC m=+0.153932785 container remove 13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:46:02 np0005605476 systemd[1]: libpod-conmon-13d9690e2012efe6eb704ddee08e797cfeaa16f0281af868be10d4ccade4055f.scope: Deactivated successfully.
Feb  2 12:46:02 np0005605476 podman[248435]: 2026-02-02 17:46:02.99907215 +0000 UTC m=+0.039691473 container create 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:46:03 np0005605476 systemd[1]: Started libpod-conmon-0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952.scope.
Feb  2 12:46:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44eb9381cbd1310756123731295be7d9207bced6df80461eb153f923bad65848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44eb9381cbd1310756123731295be7d9207bced6df80461eb153f923bad65848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44eb9381cbd1310756123731295be7d9207bced6df80461eb153f923bad65848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44eb9381cbd1310756123731295be7d9207bced6df80461eb153f923bad65848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:03.060104429 +0000 UTC m=+0.100723712 container init 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:03.065286382 +0000 UTC m=+0.105905665 container start 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:03.068180791 +0000 UTC m=+0.108800074 container attach 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:02.980137159 +0000 UTC m=+0.020756462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:46:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 897 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 66 MiB/s wr, 77 op/s
Feb  2 12:46:03 np0005605476 lvm[248527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:46:03 np0005605476 lvm[248527]: VG ceph_vg0 finished
Feb  2 12:46:03 np0005605476 lvm[248530]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:46:03 np0005605476 lvm[248530]: VG ceph_vg1 finished
Feb  2 12:46:03 np0005605476 lvm[248532]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:46:03 np0005605476 lvm[248532]: VG ceph_vg2 finished
Feb  2 12:46:03 np0005605476 bold_sanderson[248451]: {}
Feb  2 12:46:03 np0005605476 systemd[1]: libpod-0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952.scope: Deactivated successfully.
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:03.81326798 +0000 UTC m=+0.853887343 container died 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:46:03 np0005605476 systemd[1]: libpod-0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952.scope: Consumed 1.004s CPU time.
Feb  2 12:46:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-44eb9381cbd1310756123731295be7d9207bced6df80461eb153f923bad65848-merged.mount: Deactivated successfully.
Feb  2 12:46:03 np0005605476 podman[248435]: 2026-02-02 17:46:03.859877132 +0000 UTC m=+0.900496425 container remove 0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 12:46:03 np0005605476 systemd[1]: libpod-conmon-0ccef85695aef646dc15a416b8a252e8f9c82c6069ffea3a372a3d2767f89952.scope: Deactivated successfully.
Feb  2 12:46:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:46:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:46:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:46:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:46:04 np0005605476 nova_compute[239846]: 2026-02-02 17:46:04.467 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759098540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759098540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb  2 12:46:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb  2 12:46:05 np0005605476 nova_compute[239846]: 2026-02-02 17:46:05.242 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 59 MiB/s wr, 102 op/s
Feb  2 12:46:05 np0005605476 podman[248572]: 2026-02-02 17:46:05.635951745 +0000 UTC m=+0.084075874 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 12:46:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460333960' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460333960' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 905 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 49 MiB/s wr, 107 op/s
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/559668002' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/559668002' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:08 np0005605476 podman[248593]: 2026-02-02 17:46:08.647003144 +0000 UTC m=+0.094347097 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 12:46:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 457 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 22 MiB/s wr, 124 op/s
Feb  2 12:46:09 np0005605476 nova_compute[239846]: 2026-02-02 17:46:09.470 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.874145) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369874188, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1058, "num_deletes": 258, "total_data_size": 1245929, "memory_usage": 1268288, "flush_reason": "Manual Compaction"}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369882589, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1230464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18615, "largest_seqno": 19672, "table_properties": {"data_size": 1225234, "index_size": 2626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11857, "raw_average_key_size": 19, "raw_value_size": 1214354, "raw_average_value_size": 2027, "num_data_blocks": 116, "num_entries": 599, "num_filter_entries": 599, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054309, "oldest_key_time": 1770054309, "file_creation_time": 1770054369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8507 microseconds, and 4365 cpu microseconds.
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.882649) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1230464 bytes OK
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.882670) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.884246) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.884269) EVENT_LOG_v1 {"time_micros": 1770054369884262, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.884290) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1240773, prev total WAL file size 1240773, number of live WAL files 2.
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.884849) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1201KB)], [41(9173KB)]
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369884933, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10624342, "oldest_snapshot_seqno": -1}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4621 keys, 10500871 bytes, temperature: kUnknown
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369922643, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10500871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10464287, "index_size": 23930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 113209, "raw_average_key_size": 24, "raw_value_size": 10375255, "raw_average_value_size": 2245, "num_data_blocks": 1007, "num_entries": 4621, "num_filter_entries": 4621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.922975) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10500871 bytes
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.924127) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.7 rd, 277.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.0 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(17.2) write-amplify(8.5) OK, records in: 5150, records dropped: 529 output_compression: NoCompression
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.924158) EVENT_LOG_v1 {"time_micros": 1770054369924142, "job": 20, "event": "compaction_finished", "compaction_time_micros": 37849, "compaction_time_cpu_micros": 16378, "output_level": 6, "num_output_files": 1, "total_output_size": 10500871, "num_input_records": 5150, "num_output_records": 4621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369924458, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054369925994, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.884703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.926161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.926170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.926174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.926177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:09.926180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb  2 12:46:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb  2 12:46:10 np0005605476 nova_compute[239846]: 2026-02-02 17:46:10.244 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 88 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 26 MiB/s wr, 254 op/s
Feb  2 12:46:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294315150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294315150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:12 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:12.947 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:46:12 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:12.948 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:46:12 np0005605476 nova_compute[239846]: 2026-02-02 17:46:12.949 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661774795' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661774795' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 88 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 7.0 MiB/s wr, 189 op/s
Feb  2 12:46:14 np0005605476 nova_compute[239846]: 2026-02-02 17:46:14.471 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.890895) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374890932, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 331, "num_deletes": 252, "total_data_size": 133490, "memory_usage": 140880, "flush_reason": "Manual Compaction"}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374893680, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 132108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19673, "largest_seqno": 20003, "table_properties": {"data_size": 129984, "index_size": 288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5748, "raw_average_key_size": 19, "raw_value_size": 125672, "raw_average_value_size": 428, "num_data_blocks": 13, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054369, "oldest_key_time": 1770054369, "file_creation_time": 1770054374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 2859 microseconds, and 1381 cpu microseconds.
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.893746) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 132108 bytes OK
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.893776) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.894841) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.894864) EVENT_LOG_v1 {"time_micros": 1770054374894857, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.894895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 131185, prev total WAL file size 131185, number of live WAL files 2.
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.895466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(129KB)], [44(10MB)]
Feb  2 12:46:14 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374895605, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 10632979, "oldest_snapshot_seqno": -1}
Feb  2 12:46:14 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4399 keys, 7274711 bytes, temperature: kUnknown
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374936810, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7274711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7244199, "index_size": 18434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 109088, "raw_average_key_size": 24, "raw_value_size": 7163580, "raw_average_value_size": 1628, "num_data_blocks": 768, "num_entries": 4399, "num_filter_entries": 4399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.937237) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7274711 bytes
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.938321) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.3 rd, 176.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.0 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(135.6) write-amplify(55.1) OK, records in: 4914, records dropped: 515 output_compression: NoCompression
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.938346) EVENT_LOG_v1 {"time_micros": 1770054374938333, "job": 22, "event": "compaction_finished", "compaction_time_micros": 41320, "compaction_time_cpu_micros": 27112, "output_level": 6, "num_output_files": 1, "total_output_size": 7274711, "num_input_records": 4914, "num_output_records": 4399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374938528, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054374939698, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.895282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.939737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.939744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.939746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.939748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:14 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:46:14.939751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:46:15 np0005605476 nova_compute[239846]: 2026-02-02 17:46:15.246 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 88 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 5.3 MiB/s wr, 232 op/s
Feb  2 12:46:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb  2 12:46:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb  2 12:46:16 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 12:46:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb  2 12:46:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb  2 12:46:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb  2 12:46:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb  2 12:46:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 88 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.3 KiB/s wr, 93 op/s
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335522658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335522658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb  2 12:46:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb  2 12:46:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb  2 12:46:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.5 KiB/s wr, 111 op/s
Feb  2 12:46:19 np0005605476 nova_compute[239846]: 2026-02-02 17:46:19.473 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873672055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873672055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:20 np0005605476 nova_compute[239846]: 2026-02-02 17:46:20.247 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3493967388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3493967388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:20.951 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 8.4 KiB/s wr, 199 op/s
Feb  2 12:46:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 7.0 KiB/s wr, 167 op/s
Feb  2 12:46:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb  2 12:46:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb  2 12:46:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb  2 12:46:24 np0005605476 nova_compute[239846]: 2026-02-02 17:46:24.475 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Feb  2 12:46:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Feb  2 12:46:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Feb  2 12:46:25 np0005605476 nova_compute[239846]: 2026-02-02 17:46:25.250 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 7.1 KiB/s wr, 155 op/s
Feb  2 12:46:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 5.9 KiB/s wr, 122 op/s
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195374348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195374348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb  2 12:46:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb  2 12:46:28 np0005605476 nova_compute[239846]: 2026-02-02 17:46:28.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025397511' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025397511' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:29 np0005605476 nova_compute[239846]: 2026-02-02 17:46:29.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:29 np0005605476 nova_compute[239846]: 2026-02-02 17:46:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:46:29 np0005605476 nova_compute[239846]: 2026-02-02 17:46:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:46:29 np0005605476 nova_compute[239846]: 2026-02-02 17:46:29.258 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:46:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464781049' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464781049' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:29 np0005605476 nova_compute[239846]: 2026-02-02 17:46:29.477 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb  2 12:46:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb  2 12:46:30 np0005605476 nova_compute[239846]: 2026-02-02 17:46:30.252 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.6 KiB/s wr, 153 op/s
Feb  2 12:46:32 np0005605476 nova_compute[239846]: 2026-02-02 17:46:32.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:32 np0005605476 nova_compute[239846]: 2026-02-02 17:46:32.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.267 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.267 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.268 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.268 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.268 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 4.5 KiB/s wr, 124 op/s
Feb  2 12:46:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:46:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3891385936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.780 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.932 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.934 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4791MB free_disk=59.98822948895395GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.934 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:33 np0005605476 nova_compute[239846]: 2026-02-02 17:46:33.935 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.166 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.167 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.182 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.479 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411009143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.721 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.727 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.747 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.771 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:46:34 np0005605476 nova_compute[239846]: 2026-02-02 17:46:34.771 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb  2 12:46:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.305 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075334572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075334572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.8 KiB/s wr, 135 op/s
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.677 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.677 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.717 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.854 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.855 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.861 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:46:35 np0005605476 nova_compute[239846]: 2026-02-02 17:46:35.861 239853 INFO nova.compute.claims [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.034 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1465794000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1465794000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057003527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.578 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.584 239853 DEBUG nova.compute.provider_tree [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:46:36 np0005605476 podman[248689]: 2026-02-02 17:46:36.592303038 +0000 UTC m=+0.046344106 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.606 239853 DEBUG nova.scheduler.client.report [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436877281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436877281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.636 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.636 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.686 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.687 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.703 239853 INFO nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.718 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:46:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:46:36
Feb  2 12:46:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:46:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:46:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta', 'volumes']
Feb  2 12:46:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.770 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.771 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.771 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.802 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.804 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.804 239853 INFO nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Creating image(s)#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.828 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.848 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.867 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.871 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.889 239853 DEBUG nova.policy [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5d5e768af5c3478281bf15a0608b56c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '245d00a049914eb4a92746d5f02785db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.928 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.928 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.929 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.929 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.950 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:36 np0005605476 nova_compute[239846]: 2026-02-02 17:46:36.954 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.159 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.220 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] resizing rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.313 239853 DEBUG nova.objects.instance [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'migration_context' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.326 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.327 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Ensure instance console log exists: /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.327 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.327 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.328 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 6.0 KiB/s wr, 107 op/s
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:46:37 np0005605476 nova_compute[239846]: 2026-02-02 17:46:37.619 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Successfully created port: 30823208-1ce7-439a-ae72-2f638b600a83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:46:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.433 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Successfully updated port: 30823208-1ce7-439a-ae72-2f638b600a83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.448 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.448 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquired lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.448 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.556 239853 DEBUG nova.compute.manager [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-changed-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.556 239853 DEBUG nova.compute.manager [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Refreshing instance network info cache due to event network-changed-30823208-1ce7-439a-ae72-2f638b600a83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.557 239853 DEBUG oslo_concurrency.lockutils [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:46:38 np0005605476 nova_compute[239846]: 2026-02-02 17:46:38.935 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:46:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 100 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 458 KiB/s wr, 33 op/s
Feb  2 12:46:39 np0005605476 nova_compute[239846]: 2026-02-02 17:46:39.481 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:39 np0005605476 podman[248877]: 2026-02-02 17:46:39.645245279 +0000 UTC m=+0.092260589 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:46:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:39Z|00060|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb  2 12:46:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb  2 12:46:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb  2 12:46:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.069 239853 DEBUG nova.network.neutron [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updating instance_info_cache with network_info: [{"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.087 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Releasing lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.088 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Instance network_info: |[{"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.088 239853 DEBUG oslo_concurrency.lockutils [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.088 239853 DEBUG nova.network.neutron [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Refreshing network info cache for port 30823208-1ce7-439a-ae72-2f638b600a83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.091 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Start _get_guest_xml network_info=[{"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.095 239853 WARNING nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.098 239853 DEBUG nova.virt.libvirt.host [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.098 239853 DEBUG nova.virt.libvirt.host [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.101 239853 DEBUG nova.virt.libvirt.host [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.102 239853 DEBUG nova.virt.libvirt.host [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.102 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.102 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.103 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.104 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.104 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.104 239853 DEBUG nova.virt.hardware [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.107 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.306 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:46:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801993760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.641 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.665 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:40 np0005605476 nova_compute[239846]: 2026-02-02 17:46:40.668 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:46:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800992902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.251 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.253 239853 DEBUG nova.virt.libvirt.vif [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-514938494',display_name='tempest-VolumesExtendAttachedTest-instance-514938494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-514938494',id=4,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAILEvPfRLUjBhmUXr7uuO4L+EghvaK2z+tl2kkvHfRBu9cVjbbHddC8JoiMYqGKvu5GaGBbZWluWlVO7JN4R9RIdQRj6S0p3j/ASDBdkT9NE3oVQyN3eeO7h0tz454nbA==',key_name='tempest-keypair-156707880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='245d00a049914eb4a92746d5f02785db',ramdisk_id='',reservation_id='r-mlx5b6o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-229704138',owner_user_name='tempest-VolumesExtendAttachedTest-229704138-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:46:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d5e768af5c3478281bf15a0608b56c8',uuid=bce42bcf-3dfb-42dd-ac7b-84302fd0d448,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.253 239853 DEBUG nova.network.os_vif_util [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converting VIF {"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.254 239853 DEBUG nova.network.os_vif_util [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.255 239853 DEBUG nova.objects.instance [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'pci_devices' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.268 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <uuid>bce42bcf-3dfb-42dd-ac7b-84302fd0d448</uuid>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <name>instance-00000004</name>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-514938494</nova:name>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:46:40</nova:creationTime>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:user uuid="5d5e768af5c3478281bf15a0608b56c8">tempest-VolumesExtendAttachedTest-229704138-project-member</nova:user>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:project uuid="245d00a049914eb4a92746d5f02785db">tempest-VolumesExtendAttachedTest-229704138</nova:project>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <nova:port uuid="30823208-1ce7-439a-ae72-2f638b600a83">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="serial">bce42bcf-3dfb-42dd-ac7b-84302fd0d448</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="uuid">bce42bcf-3dfb-42dd-ac7b-84302fd0d448</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:0a:9c:48"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <target dev="tap30823208-1c"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/console.log" append="off"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:46:41 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:46:41 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:46:41 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:46:41 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
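The `_get_guest_xml` record above dumps the full libvirt `<domain>` definition, including the RBD-backed root disk pointing at the Ceph monitor on 192.168.122.100:6789. A minimal sketch of pulling those storage facts back out of such a dump with the standard library (the XML string below is a trimmed copy of the domain logged above; the helper name `rbd_disks` is my own, not a nova API):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <domain> XML that nova logged in _get_guest_xml above.
DOMAIN_XML = """\
<domain type="kvm">
  <uuid>bce42bcf-3dfb-42dd-ac7b-84302fd0d448</uuid>
  <name>instance-00000004</name>
  <memory>131072</memory>
  <devices>
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>"""

def rbd_disks(domain_xml):
    """Yield (target_dev, rbd_image, mon_endpoint) for each RBD-backed disk."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue  # skip non-network or non-RBD disks
        host = src.find("host")
        yield (
            disk.find("target").get("dev"),
            src.get("name"),
            "%s:%s" % (host.get("name"), host.get("port")),
        )
```

This kind of extraction is handy when correlating a guest's `vms/<uuid>_disk` image name with the `ceph mon dump` calls and `rbd_utils` messages earlier in the log.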
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.269 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Preparing to wait for external event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.269 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.269 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.269 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.270 239853 DEBUG nova.virt.libvirt.vif [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-514938494',display_name='tempest-VolumesExtendAttachedTest-instance-514938494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-514938494',id=4,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAILEvPfRLUjBhmUXr7uuO4L+EghvaK2z+tl2kkvHfRBu9cVjbbHddC8JoiMYqGKvu5GaGBbZWluWlVO7JN4R9RIdQRj6S0p3j/ASDBdkT9NE3oVQyN3eeO7h0tz454nbA==',key_name='tempest-keypair-156707880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='245d00a049914eb4a92746d5f02785db',ramdisk_id='',reservation_id='r-mlx5b6o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-229704138',owner_user_name='tempest-VolumesExtendAttachedTest-229704138-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:46:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d5e768af5c3478281bf15a0608b56c8',uuid=bce42bcf-3dfb-42dd-ac7b-84302fd0d448,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.270 239853 DEBUG nova.network.os_vif_util [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converting VIF {"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.271 239853 DEBUG nova.network.os_vif_util [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.271 239853 DEBUG os_vif [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.272 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.272 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.272 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.275 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.275 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30823208-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.275 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30823208-1c, col_values=(('external_ids', {'iface-id': '30823208-1ce7-439a-ae72-2f638b600a83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:9c:48', 'vm-uuid': 'bce42bcf-3dfb-42dd-ac7b-84302fd0d448'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.277 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:41 np0005605476 NetworkManager[49022]: <info>  [1770054401.2778] manager: (tap30823208-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.279 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.282 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.284 239853 INFO os_vif [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c')#033[00m
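The three ovsdbapp transactions logged just above (AddBridgeCommand, AddPortCommand, DbSetCommand on the Interface's `external_ids`) are issued over the OVSDB socket, but they correspond closely to `ovs-vsctl` invocations. A sketch composing those approximate CLI equivalents (the helper name is hypothetical; nova/os-vif itself does not shell out like this):

```python
def ovs_vsctl_equivalents(bridge, port, external_ids):
    """Compose ovs-vsctl argv lists approximating the ovsdbapp transactions
    seen in the log: add the bridge, add the tap port, set external_ids."""
    set_args = ["set", "Interface", port] + [
        "external_ids:%s=%s" % (k, v) for k, v in sorted(external_ids.items())
    ]
    return [
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        ["ovs-vsctl", "--may-exist", "add-br", bridge,
         "--", "set", "Bridge", bridge, "datapath_type=system"],
        # AddPortCommand + DbSetCommand batched in one transaction
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--"] + set_args,
    ]
```

With the values from this log (`br-int`, `tap30823208-1c`, and the `iface-id`/`attached-mac`/`vm-uuid` external_ids), the second command is what binds the tap device to the Neutron port so OVN can later report the port as up and fire the `network-vif-plugged` event nova is waiting for.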
Feb  2 12:46:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 113 op/s
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.452 239853 DEBUG nova.network.neutron [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updated VIF entry in instance network info cache for port 30823208-1ce7-439a-ae72-2f638b600a83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.454 239853 DEBUG nova.network.neutron [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updating instance_info_cache with network_info: [{"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.477 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.478 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.478 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No VIF found with MAC fa:16:3e:0a:9c:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.480 239853 INFO nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Using config drive#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.506 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.513 239853 DEBUG oslo_concurrency.lockutils [req-b6bd1368-c80f-4fb6-a6dc-d267be37a746 req-ea7388a1-6e97-452f-b1b2-0c03932221f7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.962 239853 INFO nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Creating config drive at /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config#033[00m
Feb  2 12:46:41 np0005605476 nova_compute[239846]: 2026-02-02 17:46:41.966 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjhuc6dfs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.087 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjhuc6dfs" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.105 239853 DEBUG nova.storage.rbd_utils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] rbd image bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.108 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.224 239853 DEBUG oslo_concurrency.processutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config bce42bcf-3dfb-42dd-ac7b-84302fd0d448_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.224 239853 INFO nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Deleting local config drive /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448/disk.config because it was imported into RBD.#033[00m
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.2639] manager: (tap30823208-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Feb  2 12:46:42 np0005605476 kernel: tap30823208-1c: entered promiscuous mode
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.264 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:42Z|00061|binding|INFO|Claiming lport 30823208-1ce7-439a-ae72-2f638b600a83 for this chassis.
Feb  2 12:46:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:42Z|00062|binding|INFO|30823208-1ce7-439a-ae72-2f638b600a83: Claiming fa:16:3e:0a:9c:48 10.100.0.10
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.269 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.278 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:9c:48 10.100.0.10'], port_security=['fa:16:3e:0a:9c:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bce42bcf-3dfb-42dd-ac7b-84302fd0d448', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc32931e-6d17-4f91-9e4f-1223c484786e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '245d00a049914eb4a92746d5f02785db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d7b5464-5bd4-487a-95ff-90246a5cbea6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46196ea2-9147-4613-bf26-dfcb5de9fa68, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=30823208-1ce7-439a-ae72-2f638b600a83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.279 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 30823208-1ce7-439a-ae72-2f638b600a83 in datapath dc32931e-6d17-4f91-9e4f-1223c484786e bound to our chassis#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.280 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dc32931e-6d17-4f91-9e4f-1223c484786e#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.288 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[808126ff-3e85-45fa-8e68-9f2d216bf040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.289 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdc32931e-61 in ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.290 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdc32931e-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.290 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[98a27884-2df4-4c90-9303-d0ecacaa2eb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.290 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f583be8c-80e2-4e07-9219-8adb620c9936]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 systemd-machined[208080]: New machine qemu-4-instance-00000004.
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.298 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[483edd18-6b23-4ad2-956b-eb3adb18b22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.299 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.302 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb  2 12:46:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:42Z|00063|binding|INFO|Setting lport 30823208-1ce7-439a-ae72-2f638b600a83 ovn-installed in OVS
Feb  2 12:46:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:42Z|00064|binding|INFO|Setting lport 30823208-1ce7-439a-ae72-2f638b600a83 up in Southbound
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.305 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.306 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[15410497-3457-4e09-9b51-260d51108560]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 systemd-udevd[249042]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.3246] device (tap30823208-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.3255] device (tap30823208-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.327 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[96a1df0c-493f-4160-aef9-03bae04edf11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.331 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b9af5a57-18ee-491c-a0cf-144f884c1304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.3330] manager: (tapdc32931e-60): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.351 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5f4f44-e281-4de6-8a28-c804e1e243ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.354 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[c3feee70-65fa-434a-957e-3fb72c8ca7af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.3674] device (tapdc32931e-60): carrier: link connected
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.369 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[be16e5b0-5865-4f6b-95bf-ca905aa60840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.380 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[677694a8-cade-4d29-973b-745921a5a34e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc32931e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:d4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 364369, 'reachable_time': 21447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249072, 'error': None, 'target': 'ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.391 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[02474e1a-bdf7-473a-846f-5dc8e2339169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:d4f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 364369, 'tstamp': 364369}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249073, 'error': None, 'target': 'ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.409 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b459e-6684-4759-88ef-4386f1cbf50f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc32931e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:d4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 364369, 'reachable_time': 21447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249074, 'error': None, 'target': 'ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.431 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[878ddf12-362d-45ec-a9da-1fc359f5bf32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.488 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[162e72bd-12e7-41cc-a0da-afc6c4d495a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.490 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc32931e-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.491 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.493 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc32931e-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.496 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 kernel: tapdc32931e-60: entered promiscuous mode
Feb  2 12:46:42 np0005605476 NetworkManager[49022]: <info>  [1770054402.4971] manager: (tapdc32931e-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.501 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdc32931e-60, col_values=(('external_ids', {'iface-id': 'ad7f119f-5c28-4418-9b8f-33443b676d87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.504 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:42Z|00065|binding|INFO|Releasing lport ad7f119f-5c28-4418-9b8f-33443b676d87 from this chassis (sb_readonly=0)
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.505 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dc32931e-6d17-4f91-9e4f-1223c484786e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dc32931e-6d17-4f91-9e4f-1223c484786e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.506 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[af4325bf-91c1-49e2-ba1c-e07d6338534f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.508 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-dc32931e-6d17-4f91-9e4f-1223c484786e
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/dc32931e-6d17-4f91-9e4f-1223c484786e.pid.haproxy
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID dc32931e-6d17-4f91-9e4f-1223c484786e
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:46:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:42.509 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e', 'env', 'PROCESS_TAG=haproxy-dc32931e-6d17-4f91-9e4f-1223c484786e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dc32931e-6d17-4f91-9e4f-1223c484786e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.515 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.617 239853 DEBUG nova.compute.manager [req-07754d4e-4a3c-4ab1-b349-2561f0d4856d req-ba81dff5-9ccd-4b75-ba52-0365e854c15b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.619 239853 DEBUG oslo_concurrency.lockutils [req-07754d4e-4a3c-4ab1-b349-2561f0d4856d req-ba81dff5-9ccd-4b75-ba52-0365e854c15b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.620 239853 DEBUG oslo_concurrency.lockutils [req-07754d4e-4a3c-4ab1-b349-2561f0d4856d req-ba81dff5-9ccd-4b75-ba52-0365e854c15b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.620 239853 DEBUG oslo_concurrency.lockutils [req-07754d4e-4a3c-4ab1-b349-2561f0d4856d req-ba81dff5-9ccd-4b75-ba52-0365e854c15b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:42 np0005605476 nova_compute[239846]: 2026-02-02 17:46:42.621 239853 DEBUG nova.compute.manager [req-07754d4e-4a3c-4ab1-b349-2561f0d4856d req-ba81dff5-9ccd-4b75-ba52-0365e854c15b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Processing event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:46:42 np0005605476 podman[249107]: 2026-02-02 17:46:42.836787085 +0000 UTC m=+0.046615804 container create c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:46:42 np0005605476 systemd[1]: Started libpod-conmon-c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98.scope.
Feb  2 12:46:42 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:46:42 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3977d0e549a62e62ec7a91792be8d8c0bc8197c1593020bbe61d85cbe05ef769/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:46:42 np0005605476 podman[249107]: 2026-02-02 17:46:42.808806525 +0000 UTC m=+0.018635254 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:46:42 np0005605476 podman[249107]: 2026-02-02 17:46:42.909577177 +0000 UTC m=+0.119405936 container init c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:46:42 np0005605476 podman[249107]: 2026-02-02 17:46:42.914833422 +0000 UTC m=+0.124662141 container start c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:46:42 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [NOTICE]   (249127) : New worker (249129) forked
Feb  2 12:46:42 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [NOTICE]   (249127) : Loading success.
Feb  2 12:46:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086330277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086330277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 2.5 MiB/s wr, 103 op/s
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.010 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.011 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054404.0097017, bce42bcf-3dfb-42dd-ac7b-84302fd0d448 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.012 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] VM Started (Lifecycle Event)#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.019 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.024 239853 INFO nova.virt.libvirt.driver [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Instance spawned successfully.#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.025 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.060 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.066 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.067 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.068 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.068 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.069 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.069 239853 DEBUG nova.virt.libvirt.driver [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.074 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.116 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.117 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054404.0114632, bce42bcf-3dfb-42dd-ac7b-84302fd0d448 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.117 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.141 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.145 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054404.0175438, bce42bcf-3dfb-42dd-ac7b-84302fd0d448 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.145 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.153 239853 INFO nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Took 7.35 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.153 239853 DEBUG nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.163 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.165 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.201 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.237 239853 INFO nova.compute.manager [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Took 8.40 seconds to build instance.#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.253 239853 DEBUG oslo_concurrency.lockutils [None req-3ff8ff2f-d79b-4fb4-a2e1-3ff9425b69c3 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4072763567' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4072763567' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.483 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.683 239853 DEBUG nova.compute.manager [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.684 239853 DEBUG oslo_concurrency.lockutils [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.684 239853 DEBUG oslo_concurrency.lockutils [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.684 239853 DEBUG oslo_concurrency.lockutils [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.685 239853 DEBUG nova.compute.manager [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] No waiting events found dispatching network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:46:44 np0005605476 nova_compute[239846]: 2026-02-02 17:46:44.685 239853 WARNING nova.compute.manager [req-325fa3c9-6caa-41a5-806a-a20daf52101d req-5a351f06-3b12-4632-a6e9-1082cd64547c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received unexpected event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:46:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986693826' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986693826' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117734365' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117734365' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:46 np0005605476 nova_compute[239846]: 2026-02-02 17:46:46.277 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:46.637 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:46:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:46.638 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:46:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:46:46.639 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1699] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/39)
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1705] device (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <warn>  [1770054407.1706] device (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1711] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/40)
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1713] device (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <warn>  [1770054407.1714] device (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1718] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.169 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1723] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1726] device (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 12:46:47 np0005605476 NetworkManager[49022]: <info>  [1770054407.1728] device (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.198 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:47Z|00066|binding|INFO|Releasing lport ad7f119f-5c28-4418-9b8f-33443b676d87 from this chassis (sb_readonly=0)
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.208 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 2.1 MiB/s wr, 167 op/s
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.419 239853 DEBUG nova.compute.manager [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-changed-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.419 239853 DEBUG nova.compute.manager [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Refreshing instance network info cache due to event network-changed-30823208-1ce7-439a-ae72-2f638b600a83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.420 239853 DEBUG oslo_concurrency.lockutils [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.420 239853 DEBUG oslo_concurrency.lockutils [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:46:47 np0005605476 nova_compute[239846]: 2026-02-02 17:46:47.420 239853 DEBUG nova.network.neutron [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Refreshing network info cache for port 30823208-1ce7-439a-ae72-2f638b600a83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000349649672257665 of space, bias 1.0, pg target 0.1048949016772995 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034962480108727235 of space, bias 1.0, pg target 0.1048874403261817 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.940604484445331e-07 of space, bias 1.0, pg target 8.821813453335993e-05 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659454593607525 of space, bias 1.0, pg target 0.19978363780822575 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0306786891576826e-06 of space, bias 4.0, pg target 0.001236814426989219 quantized to 16 (current 16)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:46:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:46:48 np0005605476 nova_compute[239846]: 2026-02-02 17:46:48.525 239853 DEBUG nova.network.neutron [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updated VIF entry in instance network info cache for port 30823208-1ce7-439a-ae72-2f638b600a83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:46:48 np0005605476 nova_compute[239846]: 2026-02-02 17:46:48.526 239853 DEBUG nova.network.neutron [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updating instance_info_cache with network_info: [{"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:46:48 np0005605476 nova_compute[239846]: 2026-02-02 17:46:48.546 239853 DEBUG oslo_concurrency.lockutils [req-52177ac3-f889-45dd-95ae-884061083eda req-cb15962c-2dfe-4e68-813f-0d1a4cb72594 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-bce42bcf-3dfb-42dd-ac7b-84302fd0d448" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:46:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141540285' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141540285' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 187 op/s
Feb  2 12:46:49 np0005605476 nova_compute[239846]: 2026-02-02 17:46:49.485 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:51 np0005605476 nova_compute[239846]: 2026-02-02 17:46:51.278 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 708 KiB/s wr, 189 op/s
Feb  2 12:46:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 161 op/s
Feb  2 12:46:54 np0005605476 nova_compute[239846]: 2026-02-02 17:46:54.486 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:46:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:55Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:9c:48 10.100.0.10
Feb  2 12:46:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:46:55Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:9c:48 10.100.0.10
Feb  2 12:46:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 159 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.4 MiB/s wr, 190 op/s
Feb  2 12:46:56 np0005605476 nova_compute[239846]: 2026-02-02 17:46:56.280 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3237370558' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3237370558' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 198 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Feb  2 12:46:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:46:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4272588873' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:46:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:46:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4272588873' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:46:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 190 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.8 MiB/s wr, 182 op/s
Feb  2 12:46:59 np0005605476 nova_compute[239846]: 2026-02-02 17:46:59.488 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:46:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:00 np0005605476 nova_compute[239846]: 2026-02-02 17:47:00.516 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:01 np0005605476 nova_compute[239846]: 2026-02-02 17:47:01.282 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb  2 12:47:02 np0005605476 nova_compute[239846]: 2026-02-02 17:47:02.747 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.458 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.458 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.490 239853 DEBUG nova.objects.instance [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'flavor' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.534 239853 INFO nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.552 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.745 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.746 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.747 239853 INFO nova.compute.manager [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Attaching volume 513435fc-d4b0-4ece-b499-886518b73833 to /dev/vdb#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.906 239853 DEBUG os_brick.utils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:47:03 np0005605476 nova_compute[239846]: 2026-02-02 17:47:03.908 239853 INFO oslo.privsep.daemon [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp9qdvacqx/privsep.sock']#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.490 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.691 239853 INFO oslo.privsep.daemon [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.484 249256 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.489 249256 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.491 249256 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.491 249256 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249256#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.694 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[36073340-6364-40f5-8693-4e07c1dedeeb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.785 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.808 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.808 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2c03e3-6aaa-4610-966b-a2372641f28e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.810 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.816 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.816 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0692e145-0aad-4b23-8f0a-b0b969371df0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.819 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.828 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.828 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[1b995391-84d0-43b6-b1cb-13ddd1709c38]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.830 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6d4a6b-0cb4-442c-a2ec-d02dc125ce57]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.831 239853 DEBUG oslo_concurrency.processutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.856 239853 DEBUG oslo_concurrency.processutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.859 239853 DEBUG os_brick.initiator.connectors.lightos [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.860 239853 DEBUG os_brick.initiator.connectors.lightos [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.860 239853 DEBUG os_brick.initiator.connectors.lightos [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.860 239853 DEBUG os_brick.utils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] <== get_connector_properties: return (953ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:47:04 np0005605476 nova_compute[239846]: 2026-02-02 17:47:04.861 239853 DEBUG nova.virt.block_device [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updating existing volume attachment record: 83496fe4-2600-4a47-a614-bd4a018da9e5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016250479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:47:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016250479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.032961437 +0000 UTC m=+0.049022690 container create 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:47:05 np0005605476 systemd[1]: Started libpod-conmon-192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3.scope.
Feb  2 12:47:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.013238864 +0000 UTC m=+0.029300167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.110999974 +0000 UTC m=+0.127061257 container init 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.118253863 +0000 UTC m=+0.134315116 container start 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.121530043 +0000 UTC m=+0.137591336 container attach 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 12:47:05 np0005605476 elated_engelbart[249360]: 167 167
Feb  2 12:47:05 np0005605476 systemd[1]: libpod-192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3.scope: Deactivated successfully.
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.126353376 +0000 UTC m=+0.142414649 container died 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:47:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fb07d565915cde6e3a14833698e359263f60ef4de64cbb30415d5849af396122-merged.mount: Deactivated successfully.
Feb  2 12:47:05 np0005605476 podman[249343]: 2026-02-02 17:47:05.165874683 +0000 UTC m=+0.181935936 container remove 192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_engelbart, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:47:05 np0005605476 systemd[1]: libpod-conmon-192953ea664d91d2b16c282b99f6d5a3608b6bbc3728ed414e45b31cd09ee3c3.scope: Deactivated successfully.
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.312897148 +0000 UTC m=+0.042162011 container create d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:47:05 np0005605476 systemd[1]: Started libpod-conmon-d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f.scope.
Feb  2 12:47:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 131 op/s
Feb  2 12:47:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.292995601 +0000 UTC m=+0.022260484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.403765308 +0000 UTC m=+0.133030191 container init d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.413896737 +0000 UTC m=+0.143161600 container start d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.417779494 +0000 UTC m=+0.147044357 container attach d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:47:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:47:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:05 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:47:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2259368720' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.833 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.834 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.835 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.841 239853 DEBUG nova.objects.instance [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'flavor' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:05 np0005605476 dazzling_taussig[249401]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:47:05 np0005605476 dazzling_taussig[249401]: --> All data devices are unavailable
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.870 239853 DEBUG nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Attempting to attach volume 513435fc-d4b0-4ece-b499-886518b73833 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.875 239853 DEBUG nova.virt.libvirt.guest [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-513435fc-d4b0-4ece-b499-886518b73833">
Feb  2 12:47:05 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:47:05 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:05 np0005605476 nova_compute[239846]:  <serial>513435fc-d4b0-4ece-b499-886518b73833</serial>
Feb  2 12:47:05 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:05 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:47:05 np0005605476 systemd[1]: libpod-d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f.scope: Deactivated successfully.
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.905688357 +0000 UTC m=+0.634953220 container died d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:47:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ad633b15aa9b57a2ac84a6da3c3a75c43417691a991557036150b7051699f9c5-merged.mount: Deactivated successfully.
Feb  2 12:47:05 np0005605476 podman[249384]: 2026-02-02 17:47:05.956770192 +0000 UTC m=+0.686035055 container remove d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:47:05 np0005605476 systemd[1]: libpod-conmon-d28434bf0cab50e2382562ef9d65bc32f414847264e48f59a4b28971c4b93c4f.scope: Deactivated successfully.
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.996 239853 DEBUG nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.997 239853 DEBUG nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.997 239853 DEBUG nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:05 np0005605476 nova_compute[239846]: 2026-02-02 17:47:05.997 239853 DEBUG nova.virt.libvirt.driver [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] No VIF found with MAC fa:16:3e:0a:9c:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:47:06 np0005605476 nova_compute[239846]: 2026-02-02 17:47:06.225 239853 DEBUG oslo_concurrency.lockutils [None req-18b94c70-35c3-499e-ab9a-d5bc5be4ac62 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:06 np0005605476 nova_compute[239846]: 2026-02-02 17:47:06.284 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.385413265 +0000 UTC m=+0.046491880 container create 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb  2 12:47:06 np0005605476 systemd[1]: Started libpod-conmon-267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa.scope.
Feb  2 12:47:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.448587393 +0000 UTC m=+0.109666048 container init 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.454459555 +0000 UTC m=+0.115538190 container start 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.362603858 +0000 UTC m=+0.023682523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.458188417 +0000 UTC m=+0.119267032 container attach 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:47:06 np0005605476 silly_shannon[249531]: 167 167
Feb  2 12:47:06 np0005605476 systemd[1]: libpod-267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa.scope: Deactivated successfully.
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.460169192 +0000 UTC m=+0.121247807 container died 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:47:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9dbb4b1cec6c963768bdafb0d4e2df76f4e54c700084928f463dd30ba09e6dd1-merged.mount: Deactivated successfully.
Feb  2 12:47:06 np0005605476 podman[249515]: 2026-02-02 17:47:06.491421002 +0000 UTC m=+0.152499617 container remove 267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:47:06 np0005605476 systemd[1]: libpod-conmon-267d41b685b601c553a05765036f4da17040654606aa424db26ad4a59cbcffaa.scope: Deactivated successfully.
Feb  2 12:47:06 np0005605476 podman[249555]: 2026-02-02 17:47:06.651003172 +0000 UTC m=+0.047799476 container create f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:47:06 np0005605476 systemd[1]: Started libpod-conmon-f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7.scope.
Feb  2 12:47:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98895326f6047756676d987336a9dab24feaeaf3cb2df986c722838165288750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98895326f6047756676d987336a9dab24feaeaf3cb2df986c722838165288750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98895326f6047756676d987336a9dab24feaeaf3cb2df986c722838165288750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:06 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98895326f6047756676d987336a9dab24feaeaf3cb2df986c722838165288750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:06 np0005605476 podman[249555]: 2026-02-02 17:47:06.634980831 +0000 UTC m=+0.031777125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:06 np0005605476 podman[249555]: 2026-02-02 17:47:06.740921066 +0000 UTC m=+0.137717380 container init f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:47:06 np0005605476 podman[249555]: 2026-02-02 17:47:06.748823643 +0000 UTC m=+0.145619927 container start f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:47:06 np0005605476 podman[249555]: 2026-02-02 17:47:06.752073303 +0000 UTC m=+0.148869587 container attach f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:47:06 np0005605476 podman[249569]: 2026-02-02 17:47:06.762860009 +0000 UTC m=+0.073631956 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]: {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    "0": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "devices": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "/dev/loop3"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            ],
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_name": "ceph_lv0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_size": "21470642176",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "name": "ceph_lv0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "tags": {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_name": "ceph",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.crush_device_class": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.encrypted": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.objectstore": "bluestore",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_id": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.vdo": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.with_tpm": "0"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            },
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "vg_name": "ceph_vg0"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        }
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    ],
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    "1": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "devices": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "/dev/loop4"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            ],
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_name": "ceph_lv1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_size": "21470642176",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "name": "ceph_lv1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "tags": {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_name": "ceph",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.crush_device_class": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.encrypted": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.objectstore": "bluestore",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_id": "1",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.vdo": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.with_tpm": "0"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            },
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "vg_name": "ceph_vg1"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        }
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    ],
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    "2": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "devices": [
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "/dev/loop5"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            ],
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_name": "ceph_lv2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_size": "21470642176",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "name": "ceph_lv2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "tags": {
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.cluster_name": "ceph",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.crush_device_class": "",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.encrypted": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.objectstore": "bluestore",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osd_id": "2",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.vdo": "0",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:                "ceph.with_tpm": "0"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            },
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "type": "block",
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:            "vg_name": "ceph_vg2"
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:        }
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]:    ]
Feb  2 12:47:07 np0005605476 ecstatic_elgamal[249577]: }
Feb  2 12:47:07 np0005605476 systemd[1]: libpod-f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7.scope: Deactivated successfully.
Feb  2 12:47:07 np0005605476 podman[249555]: 2026-02-02 17:47:07.055617514 +0000 UTC m=+0.452413788 container died f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:47:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-98895326f6047756676d987336a9dab24feaeaf3cb2df986c722838165288750-merged.mount: Deactivated successfully.
Feb  2 12:47:07 np0005605476 podman[249555]: 2026-02-02 17:47:07.097880827 +0000 UTC m=+0.494677101 container remove f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:47:07 np0005605476 systemd[1]: libpod-conmon-f1c92ce24d6f9837c1025c049d005bd46e7d4b390dad051b0bb6a910712b51d7.scope: Deactivated successfully.
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 355 KiB/s rd, 2.5 MiB/s wr, 103 op/s
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.541320385 +0000 UTC m=+0.057223315 container create 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:47:07 np0005605476 systemd[1]: Started libpod-conmon-86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018.scope.
Feb  2 12:47:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.518975881 +0000 UTC m=+0.034878861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.61671457 +0000 UTC m=+0.132617560 container init 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.624404731 +0000 UTC m=+0.140307631 container start 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:47:07 np0005605476 stoic_leakey[249690]: 167 167
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.628807082 +0000 UTC m=+0.144710012 container attach 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:47:07 np0005605476 systemd[1]: libpod-86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018.scope: Deactivated successfully.
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.629244254 +0000 UTC m=+0.145147184 container died 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb  2 12:47:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3d31fd2351e78dbb11c39243f3653ca179843368fbf80a9b2b31116588a434d8-merged.mount: Deactivated successfully.
Feb  2 12:47:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb  2 12:47:07 np0005605476 podman[249674]: 2026-02-02 17:47:07.670868619 +0000 UTC m=+0.186771519 container remove 86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leakey, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:47:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb  2 12:47:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb  2 12:47:07 np0005605476 systemd[1]: libpod-conmon-86f109197c2dafb4e942abd214c1afc6faf00d15b6aaacf4dfeab912e3dd1018.scope: Deactivated successfully.
Feb  2 12:47:07 np0005605476 podman[249713]: 2026-02-02 17:47:07.840741863 +0000 UTC m=+0.056554737 container create a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:47:07 np0005605476 systemd[1]: Started libpod-conmon-a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf.scope.
Feb  2 12:47:07 np0005605476 podman[249713]: 2026-02-02 17:47:07.822726897 +0000 UTC m=+0.038539821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:47:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993b18de1a369cab5540b492153769f9766d7c32087bc9181456ec66ebc12da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993b18de1a369cab5540b492153769f9766d7c32087bc9181456ec66ebc12da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993b18de1a369cab5540b492153769f9766d7c32087bc9181456ec66ebc12da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d993b18de1a369cab5540b492153769f9766d7c32087bc9181456ec66ebc12da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:07 np0005605476 podman[249713]: 2026-02-02 17:47:07.940378464 +0000 UTC m=+0.156191438 container init a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:47:07 np0005605476 podman[249713]: 2026-02-02 17:47:07.947032267 +0000 UTC m=+0.162845151 container start a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:47:07 np0005605476 podman[249713]: 2026-02-02 17:47:07.950114052 +0000 UTC m=+0.165926936 container attach a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.056 239853 DEBUG nova.compute.manager [req-507ce970-e81d-498a-b9e2-aedc129385f5 req-d1d519a5-eab8-4510-89e8-b8b168752814 38988b3a450545b7b5dd8e9527a3d695 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event volume-extended-513435fc-d4b0-4ece-b499-886518b73833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.078 239853 DEBUG nova.compute.manager [req-507ce970-e81d-498a-b9e2-aedc129385f5 req-d1d519a5-eab8-4510-89e8-b8b168752814 38988b3a450545b7b5dd8e9527a3d695 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Handling volume-extended event for volume 513435fc-d4b0-4ece-b499-886518b73833 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.121 239853 INFO nova.compute.manager [req-507ce970-e81d-498a-b9e2-aedc129385f5 req-d1d519a5-eab8-4510-89e8-b8b168752814 38988b3a450545b7b5dd8e9527a3d695 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Cinder extended volume 513435fc-d4b0-4ece-b499-886518b73833; extending it to detect new size#033[00m
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.249 239853 DEBUG nova.virt.libvirt.driver [req-507ce970-e81d-498a-b9e2-aedc129385f5 req-d1d519a5-eab8-4510-89e8-b8b168752814 38988b3a450545b7b5dd8e9527a3d695 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756#033[00m
Feb  2 12:47:08 np0005605476 lvm[249806]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:47:08 np0005605476 lvm[249806]: VG ceph_vg1 finished
Feb  2 12:47:08 np0005605476 lvm[249805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:47:08 np0005605476 lvm[249805]: VG ceph_vg0 finished
Feb  2 12:47:08 np0005605476 lvm[249808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:47:08 np0005605476 lvm[249808]: VG ceph_vg2 finished
Feb  2 12:47:08 np0005605476 lvm[249809]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:47:08 np0005605476 lvm[249809]: VG ceph_vg0 finished
Feb  2 12:47:08 np0005605476 flamboyant_montalcini[249727]: {}
Feb  2 12:47:08 np0005605476 systemd[1]: libpod-a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf.scope: Deactivated successfully.
Feb  2 12:47:08 np0005605476 systemd[1]: libpod-a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf.scope: Consumed 1.184s CPU time.
Feb  2 12:47:08 np0005605476 podman[249812]: 2026-02-02 17:47:08.832278122 +0000 UTC m=+0.023568489 container died a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:47:08 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d993b18de1a369cab5540b492153769f9766d7c32087bc9181456ec66ebc12da-merged.mount: Deactivated successfully.
Feb  2 12:47:08 np0005605476 podman[249812]: 2026-02-02 17:47:08.863048559 +0000 UTC m=+0.054338926 container remove a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:47:08 np0005605476 systemd[1]: libpod-conmon-a15adf1a17a07296f749465a6d15b18b52d78f5793ab756a77e1e3b793e815bf.scope: Deactivated successfully.
Feb  2 12:47:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:47:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:47:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.985 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:08 np0005605476 nova_compute[239846]: 2026-02-02 17:47:08.986 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.000 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.074 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.074 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.082 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.083 239853 INFO nova.compute.claims [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.248 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 117 KiB/s wr, 64 op/s
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.492 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.509 239853 DEBUG oslo_concurrency.lockutils [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.509 239853 DEBUG oslo_concurrency.lockutils [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.524 239853 INFO nova.compute.manager [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Detaching volume 513435fc-d4b0-4ece-b499-886518b73833#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.625 239853 INFO nova.virt.block_device [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Attempting to driver detach volume 513435fc-d4b0-4ece-b499-886518b73833 from mountpoint /dev/vdb#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.636 239853 DEBUG nova.virt.libvirt.driver [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Attempting to detach device vdb from instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.637 239853 DEBUG nova.virt.libvirt.guest [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-513435fc-d4b0-4ece-b499-886518b73833">
Feb  2 12:47:09 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <serial>513435fc-d4b0-4ece-b499-886518b73833</serial>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:09 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.646 239853 INFO nova.virt.libvirt.driver [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Successfully detached device vdb from instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448 from the persistent domain config.#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.646 239853 DEBUG nova.virt.libvirt.driver [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.647 239853 DEBUG nova.virt.libvirt.guest [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-513435fc-d4b0-4ece-b499-886518b73833">
Feb  2 12:47:09 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <serial>513435fc-d4b0-4ece-b499-886518b73833</serial>
Feb  2 12:47:09 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:47:09 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:09 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.706 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054429.7053497, bce42bcf-3dfb-42dd-ac7b-84302fd0d448 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.708 239853 DEBUG nova.virt.libvirt.driver [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.710 239853 INFO nova.virt.libvirt.driver [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Successfully detached device vdb from instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448 from the live domain config.#033[00m
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704743794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.809 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.814 239853 DEBUG nova.compute.provider_tree [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.832 239853 DEBUG nova.scheduler.client.report [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.851 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.853 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.901 239853 DEBUG nova.objects.instance [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'flavor' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.903 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.903 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.942 239853 INFO nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.958 239853 DEBUG oslo_concurrency.lockutils [None req-e1f6f69c-1947-4583-9669-5fb8c6d798f2 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:09 np0005605476 nova_compute[239846]: 2026-02-02 17:47:09.960 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.058 239853 DEBUG nova.policy [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b2b7987477543268373aac3ffda0c37', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.067 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.069 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.069 239853 INFO nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Creating image(s)#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.101 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.126 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.153 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.159 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.210 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.213 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.213 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.214 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.238 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.243 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.443 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.487 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] resizing rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.550 239853 DEBUG nova.objects.instance [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.565 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.565 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Ensure instance console log exists: /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.566 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.566 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.566 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:10 np0005605476 podman[250025]: 2026-02-02 17:47:10.658798693 +0000 UTC m=+0.106860661 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.728 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Successfully created port: fc3773bb-1860-499c-bf29-6578112f08fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.835 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.836 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.836 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.837 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.837 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.838 239853 INFO nova.compute.manager [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Terminating instance#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.839 239853 DEBUG nova.compute.manager [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:47:10 np0005605476 kernel: tap30823208-1c (unregistering): left promiscuous mode
Feb  2 12:47:10 np0005605476 NetworkManager[49022]: <info>  [1770054430.8910] device (tap30823208-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.910 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:10Z|00067|binding|INFO|Releasing lport 30823208-1ce7-439a-ae72-2f638b600a83 from this chassis (sb_readonly=0)
Feb  2 12:47:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:10Z|00068|binding|INFO|Setting lport 30823208-1ce7-439a-ae72-2f638b600a83 down in Southbound
Feb  2 12:47:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:10Z|00069|binding|INFO|Removing iface tap30823208-1c ovn-installed in OVS
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.912 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.913 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:10.919 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:9c:48 10.100.0.10'], port_security=['fa:16:3e:0a:9c:48 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bce42bcf-3dfb-42dd-ac7b-84302fd0d448', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc32931e-6d17-4f91-9e4f-1223c484786e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '245d00a049914eb4a92746d5f02785db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d7b5464-5bd4-487a-95ff-90246a5cbea6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46196ea2-9147-4613-bf26-dfcb5de9fa68, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=30823208-1ce7-439a-ae72-2f638b600a83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:47:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:10.921 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 30823208-1ce7-439a-ae72-2f638b600a83 in datapath dc32931e-6d17-4f91-9e4f-1223c484786e unbound from our chassis#033[00m
Feb  2 12:47:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:10.922 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc32931e-6d17-4f91-9e4f-1223c484786e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:47:10 np0005605476 nova_compute[239846]: 2026-02-02 17:47:10.923 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:10.924 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2eb76d-5a70-44e0-8171-288a6797a730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:10.925 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e namespace which is not needed anymore#033[00m
Feb  2 12:47:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb  2 12:47:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb  2 12:47:10 np0005605476 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb  2 12:47:10 np0005605476 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 13.735s CPU time.
Feb  2 12:47:10 np0005605476 systemd-machined[208080]: Machine qemu-4-instance-00000004 terminated.
Feb  2 12:47:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb  2 12:47:11 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [NOTICE]   (249127) : haproxy version is 2.8.14-c23fe91
Feb  2 12:47:11 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [NOTICE]   (249127) : path to executable is /usr/sbin/haproxy
Feb  2 12:47:11 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [WARNING]  (249127) : Exiting Master process...
Feb  2 12:47:11 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [ALERT]    (249127) : Current worker (249129) exited with code 143 (Terminated)
Feb  2 12:47:11 np0005605476 neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e[249123]: [WARNING]  (249127) : All workers exited. Exiting... (0)
Feb  2 12:47:11 np0005605476 systemd[1]: libpod-c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98.scope: Deactivated successfully.
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.061 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 podman[250091]: 2026-02-02 17:47:11.064796452 +0000 UTC m=+0.047266491 container died c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.067 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.077 239853 INFO nova.virt.libvirt.driver [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Instance destroyed successfully.#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.078 239853 DEBUG nova.objects.instance [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lazy-loading 'resources' on Instance uuid bce42bcf-3dfb-42dd-ac7b-84302fd0d448 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.095 239853 DEBUG nova.virt.libvirt.vif [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-514938494',display_name='tempest-VolumesExtendAttachedTest-instance-514938494',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-514938494',id=4,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAILEvPfRLUjBhmUXr7uuO4L+EghvaK2z+tl2kkvHfRBu9cVjbbHddC8JoiMYqGKvu5GaGBbZWluWlVO7JN4R9RIdQRj6S0p3j/ASDBdkT9NE3oVQyN3eeO7h0tz454nbA==',key_name='tempest-keypair-156707880',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:46:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='245d00a049914eb4a92746d5f02785db',ramdisk_id='',reservation_id='r-mlx5b6o4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-229704138',owner_user_name='tempest-VolumesExtendAttachedTest-229704138-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:46:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d5e768af5c3478281bf15a0608b56c8',uuid=bce42bcf-3dfb-42dd-ac7b-84302fd0d448,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:47:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98-userdata-shm.mount: Deactivated successfully.
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.097 239853 DEBUG nova.network.os_vif_util [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converting VIF {"id": "30823208-1ce7-439a-ae72-2f638b600a83", "address": "fa:16:3e:0a:9c:48", "network": {"id": "dc32931e-6d17-4f91-9e4f-1223c484786e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-2085587427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "245d00a049914eb4a92746d5f02785db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30823208-1c", "ovs_interfaceid": "30823208-1ce7-439a-ae72-2f638b600a83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.098 239853 DEBUG nova.network.os_vif_util [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.098 239853 DEBUG os_vif [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.100 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.101 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30823208-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3977d0e549a62e62ec7a91792be8d8c0bc8197c1593020bbe61d85cbe05ef769-merged.mount: Deactivated successfully.
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.103 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.106 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.109 239853 INFO os_vif [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:9c:48,bridge_name='br-int',has_traffic_filtering=True,id=30823208-1ce7-439a-ae72-2f638b600a83,network=Network(dc32931e-6d17-4f91-9e4f-1223c484786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30823208-1c')#033[00m
Feb  2 12:47:11 np0005605476 podman[250091]: 2026-02-02 17:47:11.112644408 +0000 UTC m=+0.095114437 container cleanup c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:47:11 np0005605476 systemd[1]: libpod-conmon-c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98.scope: Deactivated successfully.
Feb  2 12:47:11 np0005605476 podman[250139]: 2026-02-02 17:47:11.175973701 +0000 UTC m=+0.043511558 container remove c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.183 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[faccbc00-0a1f-44ef-be9c-114498ff9ff3]: (4, ('Mon Feb  2 05:47:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e (c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98)\nc46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98\nMon Feb  2 05:47:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e (c46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98)\nc46745dd290b8849c872bbc6bfe7ff45d4b3ebe81c6c7b6d350425889e524e98\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.185 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad2a798-cca9-4c46-ae5e-c433a0f2568f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.187 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc32931e-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:11 np0005605476 kernel: tapdc32931e-60: left promiscuous mode
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.190 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.197 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a45edaeb-a1a8-4862-8ea6-e557c8619a17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.204 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.215 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2925a5-0d24-44bc-aef0-fe0d4f5f0350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.218 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef88870-fd6b-41fa-90a9-fd78d098f9df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.236 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1081a6e-65ae-4fdf-af3b-f06b3c48175a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 364365, 'reachable_time': 30032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250164, 'error': None, 'target': 'ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 systemd[1]: run-netns-ovnmeta\x2ddc32931e\x2d6d17\x2d4f91\x2d9e4f\x2d1223c484786e.mount: Deactivated successfully.
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.240 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dc32931e-6d17-4f91-9e4f-1223c484786e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:47:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:11.241 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[4419b5a6-26b7-4ee6-9f0b-0d6c915a03c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 178 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 561 KiB/s wr, 47 op/s
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.386 239853 INFO nova.virt.libvirt.driver [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Deleting instance files /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448_del#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.387 239853 INFO nova.virt.libvirt.driver [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Deletion of /var/lib/nova/instances/bce42bcf-3dfb-42dd-ac7b-84302fd0d448_del complete#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.437 239853 INFO nova.compute.manager [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.438 239853 DEBUG oslo.service.loopingcall [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.438 239853 DEBUG nova.compute.manager [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:47:11 np0005605476 nova_compute[239846]: 2026-02-02 17:47:11.438 239853 DEBUG nova.network.neutron [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.165 239853 DEBUG nova.compute.manager [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-unplugged-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.166 239853 DEBUG oslo_concurrency.lockutils [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.166 239853 DEBUG oslo_concurrency.lockutils [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.167 239853 DEBUG oslo_concurrency.lockutils [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.167 239853 DEBUG nova.compute.manager [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] No waiting events found dispatching network-vif-unplugged-30823208-1ce7-439a-ae72-2f638b600a83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.167 239853 DEBUG nova.compute.manager [req-7a7003d8-addc-488b-a2c9-b5df2cc01280 req-1aec0828-b382-4d0e-9c3a-c267f69ea37e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-unplugged-30823208-1ce7-439a-ae72-2f638b600a83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.259 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Successfully updated port: fc3773bb-1860-499c-bf29-6578112f08fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.276 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.276 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquired lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.277 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.360 239853 DEBUG nova.compute.manager [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-changed-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.361 239853 DEBUG nova.compute.manager [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Refreshing instance network info cache due to event network-changed-fc3773bb-1860-499c-bf29-6578112f08fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.361 239853 DEBUG oslo_concurrency.lockutils [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.659 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.693 239853 DEBUG nova.network.neutron [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.713 239853 INFO nova.compute.manager [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Took 1.27 seconds to deallocate network for instance.#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.758 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.758 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:12 np0005605476 nova_compute[239846]: 2026-02-02 17:47:12.830 239853 DEBUG oslo_concurrency.processutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb  2 12:47:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb  2 12:47:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb  2 12:47:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 178 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 589 KiB/s wr, 48 op/s
Feb  2 12:47:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.391 239853 DEBUG nova.network.neutron [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating instance_info_cache with network_info: [{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:47:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51631428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.410 239853 DEBUG oslo_concurrency.processutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.412 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Releasing lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.412 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Instance network_info: |[{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.412 239853 DEBUG oslo_concurrency.lockutils [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.413 239853 DEBUG nova.network.neutron [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Refreshing network info cache for port fc3773bb-1860-499c-bf29-6578112f08fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.416 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Start _get_guest_xml network_info=[{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.421 239853 DEBUG nova.compute.provider_tree [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.424 239853 WARNING nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.430 239853 DEBUG nova.virt.libvirt.host [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.430 239853 DEBUG nova.virt.libvirt.host [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.433 239853 DEBUG nova.virt.libvirt.host [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.434 239853 DEBUG nova.virt.libvirt.host [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.434 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.434 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.435 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.435 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.435 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.435 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.435 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.436 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.436 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.436 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.436 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.436 239853 DEBUG nova.virt.hardware [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.440 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.465 239853 DEBUG nova.scheduler.client.report [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.496 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.523 239853 INFO nova.scheduler.client.report [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Deleted allocations for instance bce42bcf-3dfb-42dd-ac7b-84302fd0d448#033[00m
Feb  2 12:47:13 np0005605476 nova_compute[239846]: 2026-02-02 17:47:13.591 239853 DEBUG oslo_concurrency.lockutils [None req-74197d23-b389-4fa5-8985-f25e6c4c3640 5d5e768af5c3478281bf15a0608b56c8 245d00a049914eb4a92746d5f02785db - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665939072' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.008 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.028 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.031 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.243 239853 DEBUG nova.compute.manager [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.243 239853 DEBUG oslo_concurrency.lockutils [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.244 239853 DEBUG oslo_concurrency.lockutils [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.245 239853 DEBUG oslo_concurrency.lockutils [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "bce42bcf-3dfb-42dd-ac7b-84302fd0d448-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.245 239853 DEBUG nova.compute.manager [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] No waiting events found dispatching network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.245 239853 WARNING nova.compute.manager [req-e6fe54df-08d3-4533-aae8-3350b4a682ea req-10464924-1595-4ca2-9ca5-d9552c961f44 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received unexpected event network-vif-plugged-30823208-1ce7-439a-ae72-2f638b600a83 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.446 239853 DEBUG nova.compute.manager [req-26382830-3b3f-4b46-b2e8-f5e26e5b5666 req-1d43049a-48cb-48d4-a3f0-834049ed47f8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Received event network-vif-deleted-30823208-1ce7-439a-ae72-2f638b600a83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.494 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453563115' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.575 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.576 239853 DEBUG nova.virt.libvirt.vif [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:47:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-757279055',display_name='tempest-VolumesBackupsTest-instance-757279055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-757279055',id=5,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU3veD1g9HVJZ9AHTjf21AV0DAVUx0hFYv++zIuPvdsqxOgtTUjkhiaYTKtBFWr+h95LbuQUEFFChqq3nJ6w8Nr133wUa+Pz23AGcPQK1FOxXN5HUZGuv84uyBccJYyAw==',key_name='tempest-keypair-1486220055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-dsopqzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:47:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8336f054-b9e7-4211-9438-7a161c0fbbdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.576 239853 DEBUG nova.network.os_vif_util [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.577 239853 DEBUG nova.network.os_vif_util [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.578 239853 DEBUG nova.objects.instance [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.594 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <uuid>8336f054-b9e7-4211-9438-7a161c0fbbdd</uuid>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <name>instance-00000005</name>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesBackupsTest-instance-757279055</nova:name>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:47:13</nova:creationTime>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:user uuid="7b2b7987477543268373aac3ffda0c37">tempest-VolumesBackupsTest-27790021-project-member</nova:user>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:project uuid="7ff6dfb8be334eeb94d13588a609b2bd">tempest-VolumesBackupsTest-27790021</nova:project>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <nova:port uuid="fc3773bb-1860-499c-bf29-6578112f08fa">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="serial">8336f054-b9e7-4211-9438-7a161c0fbbdd</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="uuid">8336f054-b9e7-4211-9438-7a161c0fbbdd</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/8336f054-b9e7-4211-9438-7a161c0fbbdd_disk">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:b2:5f:04"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <target dev="tapfc3773bb-18"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/console.log" append="off"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:47:14 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:47:14 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:47:14 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:47:14 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.594 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Preparing to wait for external event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.595 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.595 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.595 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.596 239853 DEBUG nova.virt.libvirt.vif [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:47:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-757279055',display_name='tempest-VolumesBackupsTest-instance-757279055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-757279055',id=5,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU3veD1g9HVJZ9AHTjf21AV0DAVUx0hFYv++zIuPvdsqxOgtTUjkhiaYTKtBFWr+h95LbuQUEFFChqq3nJ6w8Nr133wUa+Pz23AGcPQK1FOxXN5HUZGuv84uyBccJYyAw==',key_name='tempest-keypair-1486220055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-dsopqzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:47:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8336f054-b9e7-4211-9438-7a161c0fbbdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.596 239853 DEBUG nova.network.os_vif_util [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.596 239853 DEBUG nova.network.os_vif_util [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.597 239853 DEBUG os_vif [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.597 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.598 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.598 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.600 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.600 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc3773bb-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.601 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc3773bb-18, col_values=(('external_ids', {'iface-id': 'fc3773bb-1860-499c-bf29-6578112f08fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:5f:04', 'vm-uuid': '8336f054-b9e7-4211-9438-7a161c0fbbdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.603 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:14 np0005605476 NetworkManager[49022]: <info>  [1770054434.6041] manager: (tapfc3773bb-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.605 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.608 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.609 239853 INFO os_vif [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18')#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.659 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.659 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.660 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No VIF found with MAC fa:16:3e:b2:5f:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.660 239853 INFO nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Using config drive#033[00m
Feb  2 12:47:14 np0005605476 nova_compute[239846]: 2026-02-02 17:47:14.681 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb  2 12:47:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb  2 12:47:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 166 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.392 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.394 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.412 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.421 239853 INFO nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Creating config drive at /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.429 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyx1wc68m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.519 239853 DEBUG nova.network.neutron [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updated VIF entry in instance network info cache for port fc3773bb-1860-499c-bf29-6578112f08fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.520 239853 DEBUG nova.network.neutron [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating instance_info_cache with network_info: [{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.543 239853 DEBUG oslo_concurrency.lockutils [req-639e4c7a-a58d-4a18-9e86-faf83f40d8f1 req-c83ca997-48b9-4e29-8513-a07f9da89941 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.558 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyx1wc68m" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.598 239853 DEBUG nova.storage.rbd_utils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.604 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.724 239853 DEBUG oslo_concurrency.processutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config 8336f054-b9e7-4211-9438-7a161c0fbbdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.726 239853 INFO nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Deleting local config drive /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd/disk.config because it was imported into RBD.#033[00m
Feb  2 12:47:15 np0005605476 kernel: tapfc3773bb-18: entered promiscuous mode
Feb  2 12:47:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:15Z|00070|binding|INFO|Claiming lport fc3773bb-1860-499c-bf29-6578112f08fa for this chassis.
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.770 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:15Z|00071|binding|INFO|fc3773bb-1860-499c-bf29-6578112f08fa: Claiming fa:16:3e:b2:5f:04 10.100.0.6
Feb  2 12:47:15 np0005605476 NetworkManager[49022]: <info>  [1770054435.7720] manager: (tapfc3773bb-18): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.783 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5f:04 10.100.0.6'], port_security=['fa:16:3e:b2:5f:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8336f054-b9e7-4211-9438-7a161c0fbbdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962ccc49-6579-46f5-b577-7995d4fef976', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '906e86ba-337b-4496-95bc-d6c4661010f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58e5e8fa-47da-4a70-b729-f06398e2ea5a, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=fc3773bb-1860-499c-bf29-6578112f08fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.784 155391 INFO neutron.agent.ovn.metadata.agent [-] Port fc3773bb-1860-499c-bf29-6578112f08fa in datapath 962ccc49-6579-46f5-b577-7995d4fef976 bound to our chassis#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.786 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962ccc49-6579-46f5-b577-7995d4fef976#033[00m
Feb  2 12:47:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:15Z|00072|binding|INFO|Setting lport fc3773bb-1860-499c-bf29-6578112f08fa ovn-installed in OVS
Feb  2 12:47:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:15Z|00073|binding|INFO|Setting lport fc3773bb-1860-499c-bf29-6578112f08fa up in Southbound
Feb  2 12:47:15 np0005605476 nova_compute[239846]: 2026-02-02 17:47:15.794 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.798 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d13738-f176-4c10-a6ae-5a4c58517b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.799 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962ccc49-61 in ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:47:15 np0005605476 systemd-udevd[250324]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.801 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962ccc49-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.801 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccbcfe5-c3a7-4340-ba2f-85fc0d324cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.803 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8671e331-9abb-4155-a417-056f0225996e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 systemd-machined[208080]: New machine qemu-5-instance-00000005.
Feb  2 12:47:15 np0005605476 NetworkManager[49022]: <info>  [1770054435.8146] device (tapfc3773bb-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:47:15 np0005605476 NetworkManager[49022]: <info>  [1770054435.8156] device (tapfc3773bb-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.817 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[8338621d-65ba-464a-9db7-1ac21aa687a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.845 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e027df03-40a0-42eb-82bb-9b4aaa2a922f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.874 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1af6bfcd-dc10-4d3c-89db-f97eefeaadf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 NetworkManager[49022]: <info>  [1770054435.8822] manager: (tap962ccc49-60): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.881 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a902d3ca-a76c-4939-ac7e-ca5001791eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 systemd-udevd[250328]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.916 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[695d7be5-2199-411c-9635-d2db71ad074f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.919 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f4274375-6d8d-46f1-aa8c-2e0110a86b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 NetworkManager[49022]: <info>  [1770054435.9382] device (tap962ccc49-60): carrier: link connected
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.943 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d214bdc7-16e5-47c7-906b-bb40f30a9613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.964 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[abb8c770-d622-4861-afff-4ca292b9ce00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962ccc49-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:57:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367726, 'reachable_time': 36247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250357, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:15.981 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9720bf26-8453-42f7-875f-f383f8392ea3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:5785'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 367726, 'tstamp': 367726}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250358, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.007 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a137f917-26c8-4c30-870d-cd5287a34443]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962ccc49-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:57:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367726, 'reachable_time': 36247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250359, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.046 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3f9a55-4842-4ad9-95aa-7f90083d3e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.112 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e059cb-0bb8-4762-a737-54479f5b2b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.114 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962ccc49-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.114 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.115 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962ccc49-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:16 np0005605476 kernel: tap962ccc49-60: entered promiscuous mode
Feb  2 12:47:16 np0005605476 NetworkManager[49022]: <info>  [1770054436.1183] manager: (tap962ccc49-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.121 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962ccc49-60, col_values=(('external_ids', {'iface-id': '7ef9b558-600a-49d5-9b00-0242ee1bfb90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.117 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.120 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.122 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.124 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.124 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.125 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c320f970-4321-463d-b0b8-a694bd9f950d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.127 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-962ccc49-6579-46f5-b577-7995d4fef976
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:47:16 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:16Z|00074|binding|INFO|Releasing lport 7ef9b558-600a-49d5-9b00-0242ee1bfb90 from this chassis (sb_readonly=0)
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 962ccc49-6579-46f5-b577-7995d4fef976
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:47:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:16.128 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'env', 'PROCESS_TAG=haproxy-962ccc49-6579-46f5-b577-7995d4fef976', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962ccc49-6579-46f5-b577-7995d4fef976.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.135 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.394 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054436.3938882, 8336f054-b9e7-4211-9438-7a161c0fbbdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.396 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] VM Started (Lifecycle Event)#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.414 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.420 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054436.3945363, 8336f054-b9e7-4211-9438-7a161c0fbbdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.420 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.442 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.445 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.463 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:47:16 np0005605476 podman[250433]: 2026-02-02 17:47:16.473361141 +0000 UTC m=+0.053704538 container create b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:47:16 np0005605476 systemd[1]: Started libpod-conmon-b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb.scope.
Feb  2 12:47:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.515 239853 DEBUG nova.compute.manager [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.516 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.517 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.517 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.517 239853 DEBUG nova.compute.manager [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Processing event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.518 239853 DEBUG nova.compute.manager [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.518 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:47:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaf58e2265b31029ed7332f3bcc8188c55dde2f3c2ef8175d26d1be59a6d6b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.519 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.519 239853 DEBUG oslo_concurrency.lockutils [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.519 239853 DEBUG nova.compute.manager [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] No waiting events found dispatching network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.520 239853 WARNING nova.compute.manager [req-24d3af31-ecd3-43be-a4ed-a28915a41b3f req-3374e386-1b12-41ff-897c-5ad1811adfa8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received unexpected event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa for instance with vm_state building and task_state spawning.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.520 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.529 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054436.5288515, 8336f054-b9e7-4211-9438-7a161c0fbbdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.529 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] VM Resumed (Lifecycle Event)
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.531 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb  2 12:47:16 np0005605476 podman[250433]: 2026-02-02 17:47:16.532866608 +0000 UTC m=+0.113210035 container init b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.535 239853 INFO nova.virt.libvirt.driver [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Instance spawned successfully.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.536 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb  2 12:47:16 np0005605476 podman[250433]: 2026-02-02 17:47:16.537337931 +0000 UTC m=+0.117681338 container start b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:47:16 np0005605476 podman[250433]: 2026-02-02 17:47:16.453406552 +0000 UTC m=+0.033749969 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.550 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.553 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 12:47:16 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [NOTICE]   (250452) : New worker (250454) forked
Feb  2 12:47:16 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [NOTICE]   (250452) : Loading success.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.563 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.563 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.564 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.565 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.566 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.567 239853 DEBUG nova.virt.libvirt.driver [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.574 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.624 239853 INFO nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Took 6.56 seconds to spawn the instance on the hypervisor.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.625 239853 DEBUG nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.693 239853 INFO nova.compute.manager [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Took 7.64 seconds to build instance.
Feb  2 12:47:16 np0005605476 nova_compute[239846]: 2026-02-02 17:47:16.710 239853 DEBUG oslo_concurrency.lockutils [None req-6ca716e0-52be-438d-b6f1-f120e5511382 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560927780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:47:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560927780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:47:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:47:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/738329969' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:47:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:47:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/738329969' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:47:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 MiB/s wr, 160 op/s
Feb  2 12:47:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 274 op/s
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.496 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.540 239853 DEBUG nova.compute.manager [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-changed-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.541 239853 DEBUG nova.compute.manager [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Refreshing instance network info cache due to event network-changed-fc3773bb-1860-499c-bf29-6578112f08fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.541 239853 DEBUG oslo_concurrency.lockutils [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.541 239853 DEBUG oslo_concurrency.lockutils [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.541 239853 DEBUG nova.network.neutron [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Refreshing network info cache for port fc3773bb-1860-499c-bf29-6578112f08fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 12:47:19 np0005605476 nova_compute[239846]: 2026-02-02 17:47:19.603 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb  2 12:47:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb  2 12:47:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb  2 12:47:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:20Z|00075|binding|INFO|Releasing lport 7ef9b558-600a-49d5-9b00-0242ee1bfb90 from this chassis (sb_readonly=0)
Feb  2 12:47:20 np0005605476 nova_compute[239846]: 2026-02-02 17:47:20.582 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:20 np0005605476 nova_compute[239846]: 2026-02-02 17:47:20.926 239853 DEBUG nova.network.neutron [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updated VIF entry in instance network info cache for port fc3773bb-1860-499c-bf29-6578112f08fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 12:47:20 np0005605476 nova_compute[239846]: 2026-02-02 17:47:20.927 239853 DEBUG nova.network.neutron [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating instance_info_cache with network_info: [{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:47:20 np0005605476 nova_compute[239846]: 2026-02-02 17:47:20.960 239853 DEBUG oslo_concurrency.lockutils [req-c6800b94-ed74-4bf7-9914-b8538852d16d req-806e3472-77d8-4faf-8938-40bc8677fc55 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:47:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 233 op/s
Feb  2 12:47:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 186 op/s
Feb  2 12:47:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:24.397 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:47:24 np0005605476 nova_compute[239846]: 2026-02-02 17:47:24.499 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:24 np0005605476 nova_compute[239846]: 2026-02-02 17:47:24.605 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb  2 12:47:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb  2 12:47:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb  2 12:47:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 175 op/s
Feb  2 12:47:26 np0005605476 nova_compute[239846]: 2026-02-02 17:47:26.076 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054431.0749567, bce42bcf-3dfb-42dd-ac7b-84302fd0d448 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:47:26 np0005605476 nova_compute[239846]: 2026-02-02 17:47:26.076 239853 INFO nova.compute.manager [-] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] VM Stopped (Lifecycle Event)
Feb  2 12:47:26 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:26Z|00076|binding|INFO|Releasing lport 7ef9b558-600a-49d5-9b00-0242ee1bfb90 from this chassis (sb_readonly=0)
Feb  2 12:47:26 np0005605476 nova_compute[239846]: 2026-02-02 17:47:26.104 239853 DEBUG nova.compute.manager [None req-1906dea6-7430-478f-81e4-580dea872a74 - - - - - -] [instance: bce42bcf-3dfb-42dd-ac7b-84302fd0d448] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:47:26 np0005605476 nova_compute[239846]: 2026-02-02 17:47:26.181 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:27Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:5f:04 10.100.0.6
Feb  2 12:47:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:27Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:5f:04 10.100.0.6
Feb  2 12:47:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 76 op/s
Feb  2 12:47:27 np0005605476 nova_compute[239846]: 2026-02-02 17:47:27.666 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:28 np0005605476 nova_compute[239846]: 2026-02-02 17:47:28.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 12:47:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 144 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 889 KiB/s wr, 36 op/s
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.500 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.577 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.577 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.577 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.577 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:47:29 np0005605476 nova_compute[239846]: 2026-02-02 17:47:29.606 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:30 np0005605476 nova_compute[239846]: 2026-02-02 17:47:30.422 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.160 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating instance_info_cache with network_info: [{"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.177 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-8336f054-b9e7-4211-9438-7a161c0fbbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.178 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb  2 12:47:31 np0005605476 nova_compute[239846]: 2026-02-02 17:47:31.265 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb  2 12:47:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Feb  2 12:47:32 np0005605476 nova_compute[239846]: 2026-02-02 17:47:32.259 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:32 np0005605476 nova_compute[239846]: 2026-02-02 17:47:32.260 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:32 np0005605476 nova_compute[239846]: 2026-02-02 17:47:32.297 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.275 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.275 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.275 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.276 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.276 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Feb  2 12:47:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:47:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438862941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.771 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.942 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:47:33 np0005605476 nova_compute[239846]: 2026-02-02 17:47:33.943 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.064 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.065 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=59.94280376750976GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.065 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.066 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.320 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 8336f054-b9e7-4211-9438-7a161c0fbbdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.321 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.322 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.441 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.502 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.646 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:34 np0005605476 nova_compute[239846]: 2026-02-02 17:47:34.987 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:47:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784492080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.017 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.022 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.041 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.113 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.114 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.231 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.232 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.289 239853 DEBUG nova.objects.instance [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.291 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.313 239853 INFO nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.328 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.529 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.530 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.530 239853 INFO nova.compute.manager [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Attaching volume 75d9036a-9c8b-43cd-8ee9-ec4d5e57992d to /dev/vdb#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.659 239853 DEBUG os_brick.utils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.660 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.670 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.670 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[17634026-6081-4cb7-9df1-8a0e2ee8e592]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.672 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.679 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.680 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[f87c9fbc-a574-4e6a-a5ff-1466ec8d7617]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.681 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.687 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.688 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[964f61c2-9cb1-48e5-9443-f0e087e019d2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.689 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4ce824-2539-4d86-aca1-05a9b820c778]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.690 239853 DEBUG oslo_concurrency.processutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.705 239853 DEBUG oslo_concurrency.processutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.708 239853 DEBUG os_brick.initiator.connectors.lightos [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.708 239853 DEBUG os_brick.initiator.connectors.lightos [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.708 239853 DEBUG os_brick.initiator.connectors.lightos [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.709 239853 DEBUG os_brick.utils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:47:35 np0005605476 nova_compute[239846]: 2026-02-02 17:47:35.709 239853 DEBUG nova.virt.block_device [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating existing volume attachment record: eaf46c3f-f3f9-48c3-9c2c-dba945d577f5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.315 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.315 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.316 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:47:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2948084785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.497 239853 DEBUG nova.objects.instance [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.538 239853 DEBUG nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Attempting to attach volume 75d9036a-9c8b-43cd-8ee9-ec4d5e57992d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.540 239853 DEBUG nova.virt.libvirt.guest [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-75d9036a-9c8b-43cd-8ee9-ec4d5e57992d">
Feb  2 12:47:36 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:47:36 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:36 np0005605476 nova_compute[239846]:  <serial>75d9036a-9c8b-43cd-8ee9-ec4d5e57992d</serial>
Feb  2 12:47:36 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:36 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.655 239853 DEBUG nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.655 239853 DEBUG nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.656 239853 DEBUG nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.656 239853 DEBUG nova.virt.libvirt.driver [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No VIF found with MAC fa:16:3e:b2:5f:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.706 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:47:36
Feb  2 12:47:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:47:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:47:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb  2 12:47:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:47:36 np0005605476 nova_compute[239846]: 2026-02-02 17:47:36.884 239853 DEBUG oslo_concurrency.lockutils [None req-28370561-f605-48bd-853b-f8a4503948a9 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:47:37 np0005605476 podman[250536]: 2026-02-02 17:47:37.61785067 +0000 UTC m=+0.059538909 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:47:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:47:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3768326342' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 12:47:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb  2 12:47:39 np0005605476 nova_compute[239846]: 2026-02-02 17:47:39.504 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb  2 12:47:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb  2 12:47:39 np0005605476 nova_compute[239846]: 2026-02-02 17:47:39.648 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:40 np0005605476 nova_compute[239846]: 2026-02-02 17:47:40.533 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb  2 12:47:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb  2 12:47:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb  2 12:47:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 15 op/s
Feb  2 12:47:41 np0005605476 nova_compute[239846]: 2026-02-02 17:47:41.478 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:41 np0005605476 podman[250557]: 2026-02-02 17:47:41.648016536 +0000 UTC m=+0.090804499 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb  2 12:47:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 18 KiB/s wr, 14 op/s
Feb  2 12:47:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb  2 12:47:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb  2 12:47:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb  2 12:47:44 np0005605476 nova_compute[239846]: 2026-02-02 17:47:44.511 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:44 np0005605476 nova_compute[239846]: 2026-02-02 17:47:44.650 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 824 KiB/s rd, 7.2 KiB/s wr, 89 op/s
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.511 239853 DEBUG oslo_concurrency.lockutils [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.512 239853 DEBUG oslo_concurrency.lockutils [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.561 239853 INFO nova.compute.manager [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Detaching volume 75d9036a-9c8b-43cd-8ee9-ec4d5e57992d#033[00m
Feb  2 12:47:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3184672077' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.712 239853 INFO nova.virt.block_device [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Attempting to driver detach volume 75d9036a-9c8b-43cd-8ee9-ec4d5e57992d from mountpoint /dev/vdb#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.719 239853 DEBUG nova.virt.libvirt.driver [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Attempting to detach device vdb from instance 8336f054-b9e7-4211-9438-7a161c0fbbdd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.720 239853 DEBUG nova.virt.libvirt.guest [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-75d9036a-9c8b-43cd-8ee9-ec4d5e57992d">
Feb  2 12:47:45 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <serial>75d9036a-9c8b-43cd-8ee9-ec4d5e57992d</serial>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:45 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.727 239853 INFO nova.virt.libvirt.driver [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully detached device vdb from instance 8336f054-b9e7-4211-9438-7a161c0fbbdd from the persistent domain config.#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.727 239853 DEBUG nova.virt.libvirt.driver [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8336f054-b9e7-4211-9438-7a161c0fbbdd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.728 239853 DEBUG nova.virt.libvirt.guest [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-75d9036a-9c8b-43cd-8ee9-ec4d5e57992d">
Feb  2 12:47:45 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <serial>75d9036a-9c8b-43cd-8ee9-ec4d5e57992d</serial>
Feb  2 12:47:45 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:47:45 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:47:45 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.829 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054465.8295803, 8336f054-b9e7-4211-9438-7a161c0fbbdd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.830 239853 DEBUG nova.virt.libvirt.driver [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8336f054-b9e7-4211-9438-7a161c0fbbdd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:47:45 np0005605476 nova_compute[239846]: 2026-02-02 17:47:45.833 239853 INFO nova.virt.libvirt.driver [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully detached device vdb from instance 8336f054-b9e7-4211-9438-7a161c0fbbdd from the live domain config.#033[00m
Feb  2 12:47:46 np0005605476 nova_compute[239846]: 2026-02-02 17:47:46.010 239853 DEBUG nova.objects.instance [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:46 np0005605476 nova_compute[239846]: 2026-02-02 17:47:46.051 239853 DEBUG oslo_concurrency.lockutils [None req-f2fb3cf8-6250-4320-b23b-03c26208072b 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:46.638 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:46.638 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:46.639 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.213 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.213 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.214 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.214 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.214 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.215 239853 INFO nova.compute.manager [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Terminating instance#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.216 239853 DEBUG nova.compute.manager [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:47:47 np0005605476 kernel: tapfc3773bb-18 (unregistering): left promiscuous mode
Feb  2 12:47:47 np0005605476 NetworkManager[49022]: <info>  [1770054467.3598] device (tapfc3773bb-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:47:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:47Z|00077|binding|INFO|Releasing lport fc3773bb-1860-499c-bf29-6578112f08fa from this chassis (sb_readonly=0)
Feb  2 12:47:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:47Z|00078|binding|INFO|Setting lport fc3773bb-1860-499c-bf29-6578112f08fa down in Southbound
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.365 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:47:47Z|00079|binding|INFO|Removing iface tapfc3773bb-18 ovn-installed in OVS
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.367 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.375 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 5.7 KiB/s wr, 74 op/s
Feb  2 12:47:47 np0005605476 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb  2 12:47:47 np0005605476 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 12.614s CPU time.
Feb  2 12:47:47 np0005605476 systemd-machined[208080]: Machine qemu-5-instance-00000005 terminated.
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.428 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5f:04 10.100.0.6'], port_security=['fa:16:3e:b2:5f:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8336f054-b9e7-4211-9438-7a161c0fbbdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962ccc49-6579-46f5-b577-7995d4fef976', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '906e86ba-337b-4496-95bc-d6c4661010f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58e5e8fa-47da-4a70-b729-f06398e2ea5a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=fc3773bb-1860-499c-bf29-6578112f08fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.430 155391 INFO neutron.agent.ovn.metadata.agent [-] Port fc3773bb-1860-499c-bf29-6578112f08fa in datapath 962ccc49-6579-46f5-b577-7995d4fef976 unbound from our chassis#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.431 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962ccc49-6579-46f5-b577-7995d4fef976, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.432 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[29d1b25d-0bdf-40b5-8c41-1a28a6a43c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.433 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 namespace which is not needed anymore#033[00m
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.444 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.448 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007607537243696459 of space, bias 1.0, pg target 0.2282261173108938 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003501379203461045 of space, bias 1.0, pg target 0.10504137610383135 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.174602979762003e-07 of space, bias 1.0, pg target 0.00021523808939286009 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659444036556422 of space, bias 1.0, pg target 0.19978332109669267 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0183517794873927e-06 of space, bias 4.0, pg target 0.0012220221353848712 quantized to 16 (current 16)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:47:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.457 239853 INFO nova.virt.libvirt.driver [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Instance destroyed successfully.#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.457 239853 DEBUG nova.objects.instance [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'resources' on Instance uuid 8336f054-b9e7-4211-9438-7a161c0fbbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.474 239853 DEBUG nova.virt.libvirt.vif [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:47:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-757279055',display_name='tempest-VolumesBackupsTest-instance-757279055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-757279055',id=5,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU3veD1g9HVJZ9AHTjf21AV0DAVUx0hFYv++zIuPvdsqxOgtTUjkhiaYTKtBFWr+h95LbuQUEFFChqq3nJ6w8Nr133wUa+Pz23AGcPQK1FOxXN5HUZGuv84uyBccJYyAw==',key_name='tempest-keypair-1486220055',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:47:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-dsopqzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:47:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8336f054-b9e7-4211-9438-7a161c0fbbdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.475 239853 DEBUG nova.network.os_vif_util [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "fc3773bb-1860-499c-bf29-6578112f08fa", "address": "fa:16:3e:b2:5f:04", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3773bb-18", "ovs_interfaceid": "fc3773bb-1860-499c-bf29-6578112f08fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.475 239853 DEBUG nova.network.os_vif_util [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.475 239853 DEBUG os_vif [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.477 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.477 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc3773bb-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.478 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.479 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.482 239853 INFO os_vif [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5f:04,bridge_name='br-int',has_traffic_filtering=True,id=fc3773bb-1860-499c-bf29-6578112f08fa,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3773bb-18')#033[00m
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [NOTICE]   (250452) : haproxy version is 2.8.14-c23fe91
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [NOTICE]   (250452) : path to executable is /usr/sbin/haproxy
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [WARNING]  (250452) : Exiting Master process...
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [WARNING]  (250452) : Exiting Master process...
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [ALERT]    (250452) : Current worker (250454) exited with code 143 (Terminated)
Feb  2 12:47:47 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[250448]: [WARNING]  (250452) : All workers exited. Exiting... (0)
Feb  2 12:47:47 np0005605476 systemd[1]: libpod-b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb.scope: Deactivated successfully.
Feb  2 12:47:47 np0005605476 podman[250637]: 2026-02-02 17:47:47.628592069 +0000 UTC m=+0.107326933 container died b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb  2 12:47:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb-userdata-shm.mount: Deactivated successfully.
Feb  2 12:47:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ceaf58e2265b31029ed7332f3bcc8188c55dde2f3c2ef8175d26d1be59a6d6b7-merged.mount: Deactivated successfully.
Feb  2 12:47:47 np0005605476 podman[250637]: 2026-02-02 17:47:47.750530534 +0000 UTC m=+0.229265378 container cleanup b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:47:47 np0005605476 systemd[1]: libpod-conmon-b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb.scope: Deactivated successfully.
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.820 239853 DEBUG nova.compute.manager [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-unplugged-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.821 239853 DEBUG oslo_concurrency.lockutils [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.822 239853 DEBUG oslo_concurrency.lockutils [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.822 239853 DEBUG oslo_concurrency.lockutils [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.822 239853 DEBUG nova.compute.manager [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] No waiting events found dispatching network-vif-unplugged-fc3773bb-1860-499c-bf29-6578112f08fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.823 239853 DEBUG nova.compute.manager [req-98021f53-cb43-4805-9ef6-a8177c28ce08 req-2e219855-5256-4c68-9e5b-88a5d5220487 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-unplugged-fc3773bb-1860-499c-bf29-6578112f08fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:47:47 np0005605476 podman[250667]: 2026-02-02 17:47:47.850217457 +0000 UTC m=+0.083173249 container remove b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.855 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a103ac2-fc5e-4833-af92-030a2df70e3c]: (4, ('Mon Feb  2 05:47:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 (b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb)\nb604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb\nMon Feb  2 05:47:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 (b604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb)\nb604a387a3bc3e75ec683e676c81774d58e08c767afee726a1f600aacbd635bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.857 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d84d4726-61f8-4601-9aec-177518b2ae1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.858 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962ccc49-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:47:47 np0005605476 kernel: tap962ccc49-60: left promiscuous mode
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.860 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 nova_compute[239846]: 2026-02-02 17:47:47.866 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.869 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[af750412-b33b-4402-bd08-66dbff9c8321]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.888 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69421085-7852-4cdc-8a4d-008e2f07786a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.889 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e6a1c8-0703-45a6-a518-cba0c3c2342f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.906 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a46cad4c-d133-4af2-bd36-43705a54fb56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367720, 'reachable_time': 25810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250684, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:47 np0005605476 systemd[1]: run-netns-ovnmeta\x2d962ccc49\x2d6579\x2d46f5\x2db577\x2d7995d4fef976.mount: Deactivated successfully.
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.912 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:47:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:47:47.913 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[d1814d4e-0447-42fb-8fa4-42c698bc3f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.506 239853 INFO nova.virt.libvirt.driver [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Deleting instance files /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd_del#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.507 239853 INFO nova.virt.libvirt.driver [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Deletion of /var/lib/nova/instances/8336f054-b9e7-4211-9438-7a161c0fbbdd_del complete#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.567 239853 INFO nova.compute.manager [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Took 1.35 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.568 239853 DEBUG oslo.service.loopingcall [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.568 239853 DEBUG nova.compute.manager [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:47:48 np0005605476 nova_compute[239846]: 2026-02-02 17:47:48.569 239853 DEBUG nova.network.neutron [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:47:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 274 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 MiB/s wr, 206 op/s
Feb  2 12:47:49 np0005605476 nova_compute[239846]: 2026-02-02 17:47:49.508 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:47:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb  2 12:47:49 np0005605476 nova_compute[239846]: 2026-02-02 17:47:49.918 239853 DEBUG nova.network.neutron [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:49.999 239853 DEBUG nova.compute.manager [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.000 239853 DEBUG oslo_concurrency.lockutils [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.000 239853 DEBUG oslo_concurrency.lockutils [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.000 239853 DEBUG oslo_concurrency.lockutils [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.000 239853 DEBUG nova.compute.manager [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] No waiting events found dispatching network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.000 239853 WARNING nova.compute.manager [req-06cf1f3d-ffee-4af4-b4c6-7816cc28b7f6 req-b85aed70-699c-469e-a413-2770ca9a3046 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received unexpected event network-vif-plugged-fc3773bb-1860-499c-bf29-6578112f08fa for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:47:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.095 239853 INFO nova.compute.manager [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Took 1.53 seconds to deallocate network for instance.#033[00m
Feb  2 12:47:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.314 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.315 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.378 239853 DEBUG oslo_concurrency.processutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:47:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:47:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142384757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.940 239853 DEBUG oslo_concurrency.processutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:47:50 np0005605476 nova_compute[239846]: 2026-02-02 17:47:50.946 239853 DEBUG nova.compute.provider_tree [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:47:51 np0005605476 nova_compute[239846]: 2026-02-02 17:47:51.006 239853 DEBUG nova.scheduler.client.report [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:47:51 np0005605476 nova_compute[239846]: 2026-02-02 17:47:51.072 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:47:51 np0005605476 nova_compute[239846]: 2026-02-02 17:47:51.167 239853 INFO nova.scheduler.client.report [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Deleted allocations for instance 8336f054-b9e7-4211-9438-7a161c0fbbdd
Feb  2 12:47:51 np0005605476 nova_compute[239846]: 2026-02-02 17:47:51.342 239853 DEBUG oslo_concurrency.lockutils [None req-5e4807e8-5432-46ec-a94b-5b00e9fbe981 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8336f054-b9e7-4211-9438-7a161c0fbbdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:47:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 642 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 67 MiB/s wr, 291 op/s
Feb  2 12:47:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2110037928' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:52 np0005605476 nova_compute[239846]: 2026-02-02 17:47:52.114 239853 DEBUG nova.compute.manager [req-48978108-ae9c-4252-9d8b-76f67056143f req-eeb82fb2-8911-423c-aed3-b32fefec3583 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Received event network-vif-deleted-fc3773bb-1860-499c-bf29-6578112f08fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:47:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb  2 12:47:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb  2 12:47:52 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb  2 12:47:52 np0005605476 nova_compute[239846]: 2026-02-02 17:47:52.480 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 642 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 67 MiB/s wr, 237 op/s
Feb  2 12:47:54 np0005605476 nova_compute[239846]: 2026-02-02 17:47:54.508 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.633760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474633794, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 253, "total_data_size": 1930021, "memory_usage": 1964576, "flush_reason": "Manual Compaction"}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474667206, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1906343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20004, "largest_seqno": 21440, "table_properties": {"data_size": 1899474, "index_size": 3944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15274, "raw_average_key_size": 20, "raw_value_size": 1885359, "raw_average_value_size": 2554, "num_data_blocks": 175, "num_entries": 738, "num_filter_entries": 738, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054375, "oldest_key_time": 1770054375, "file_creation_time": 1770054474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 33511 microseconds, and 4899 cpu microseconds.
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.667264) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1906343 bytes OK
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.667290) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.675255) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.675337) EVENT_LOG_v1 {"time_micros": 1770054474675325, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.675374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1923477, prev total WAL file size 1923477, number of live WAL files 2.
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.676417) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1861KB)], [47(7104KB)]
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474676499, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9181054, "oldest_snapshot_seqno": -1}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4616 keys, 7430751 bytes, temperature: kUnknown
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474727027, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7430751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7398335, "index_size": 19762, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 114686, "raw_average_key_size": 24, "raw_value_size": 7313422, "raw_average_value_size": 1584, "num_data_blocks": 817, "num_entries": 4616, "num_filter_entries": 4616, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.727302) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7430751 bytes
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.732004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.3 rd, 146.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 5137, records dropped: 521 output_compression: NoCompression
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.732265) EVENT_LOG_v1 {"time_micros": 1770054474732030, "job": 24, "event": "compaction_finished", "compaction_time_micros": 50627, "compaction_time_cpu_micros": 17894, "output_level": 6, "num_output_files": 1, "total_output_size": 7430751, "num_input_records": 5137, "num_output_records": 4616, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474732632, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054474733641, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.676284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.733699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.733704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.733706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.733708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:47:54.733710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:47:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:47:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 934 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 112 MiB/s wr, 295 op/s
Feb  2 12:47:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:47:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479383421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:47:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:47:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479383421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:47:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 89 MiB/s wr, 262 op/s
Feb  2 12:47:57 np0005605476 nova_compute[239846]: 2026-02-02 17:47:57.483 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:47:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2400864883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:47:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb  2 12:47:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb  2 12:47:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb  2 12:47:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 904 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 73 MiB/s wr, 352 op/s
Feb  2 12:47:59 np0005605476 nova_compute[239846]: 2026-02-02 17:47:59.510 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:47:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 226 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 69 MiB/s wr, 409 op/s
Feb  2 12:48:02 np0005605476 nova_compute[239846]: 2026-02-02 17:48:02.455 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054467.454263, 8336f054-b9e7-4211-9438-7a161c0fbbdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:48:02 np0005605476 nova_compute[239846]: 2026-02-02 17:48:02.456 239853 INFO nova.compute.manager [-] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] VM Stopped (Lifecycle Event)
Feb  2 12:48:02 np0005605476 nova_compute[239846]: 2026-02-02 17:48:02.481 239853 DEBUG nova.compute.manager [None req-e8ba5758-f649-4808-b613-f536e8352fcb - - - - - -] [instance: 8336f054-b9e7-4211-9438-7a161c0fbbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:48:02 np0005605476 nova_compute[239846]: 2026-02-02 17:48:02.486 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 226 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 52 MiB/s wr, 249 op/s
Feb  2 12:48:04 np0005605476 nova_compute[239846]: 2026-02-02 17:48:04.562 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303550611' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303550611' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb  2 12:48:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb  2 12:48:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb  2 12:48:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 252 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 14 MiB/s wr, 257 op/s
Feb  2 12:48:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2166563203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:06 np0005605476 nova_compute[239846]: 2026-02-02 17:48:06.832 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:48:06 np0005605476 nova_compute[239846]: 2026-02-02 17:48:06.832 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.020 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.182 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.183 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.191 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.191 239853 INFO nova.compute.claims [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Claim successful on node compute-0.ctlplane.example.com
Feb  2 12:48:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 273 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 14 MiB/s wr, 250 op/s
Feb  2 12:48:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.467 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:48:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb  2 12:48:07 np0005605476 nova_compute[239846]: 2026-02-02 17:48:07.488 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187140716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.142 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.147 239853 DEBUG nova.compute.provider_tree [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.187 239853 DEBUG nova.scheduler.client.report [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.403 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.404 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.549 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.549 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 12:48:08 np0005605476 podman[250730]: 2026-02-02 17:48:08.602751424 +0000 UTC m=+0.049264314 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.657 239853 INFO nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.710 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.798 239853 INFO nova.virt.block_device [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Booting with volume 4e86e1d0-a313-4a91-bd24-41503d2238a5 at /dev/vda#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.977 239853 DEBUG os_brick.utils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.983 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.991 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.991 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ab61fd-d662-4d3c-8cf1-28c2af7b7568]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:08 np0005605476 nova_compute[239846]: 2026-02-02 17:48:08.992 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.000 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.000 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[4428e4be-c6d4-40a2-872e-a9d4bfaed35e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.001 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.008 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.008 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[b3091a8c-7b97-4153-9998-f5bd82e10bb0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.009 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75e460-3f66-473e-857c-6b3dfb67b93b]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.010 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.025 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.028 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.029 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.029 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.030 239853 DEBUG os_brick.utils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.030 239853 DEBUG nova.virt.block_device [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating existing volume attachment record: 51397bea-8881-45dc-9b0a-06925e1f09b1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.044 239853 DEBUG nova.policy [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '54155456326c45d8b04d2cc748cac4b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a421a2228c5b482197ddfa633ea50690', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:48:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 299 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 74 op/s
Feb  2 12:48:09 np0005605476 nova_compute[239846]: 2026-02-02 17:48:09.563 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1581439957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.278353609 +0000 UTC m=+0.027394730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.408298046 +0000 UTC m=+0.157339117 container create d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:10 np0005605476 systemd[1]: Started libpod-conmon-d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52.scope.
Feb  2 12:48:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.601716694 +0000 UTC m=+0.350757795 container init d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.610013547 +0000 UTC m=+0.359054618 container start d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:48:10 np0005605476 blissful_kalam[250917]: 167 167
Feb  2 12:48:10 np0005605476 systemd[1]: libpod-d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52.scope: Deactivated successfully.
Feb  2 12:48:10 np0005605476 conmon[250917]: conmon d8082662501da2a08341 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52.scope/container/memory.events
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.620181552 +0000 UTC m=+0.369222723 container attach d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.620665516 +0000 UTC m=+0.369706627 container died d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:48:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-802a6d4768fd8f674058a7973a632fc2245831ff297b7f4bf4b56d72bcce9c08-merged.mount: Deactivated successfully.
Feb  2 12:48:10 np0005605476 podman[250900]: 2026-02-02 17:48:10.776690274 +0000 UTC m=+0.525731355 container remove d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:48:10 np0005605476 systemd[1]: libpod-conmon-d8082662501da2a0834131a8b4ed475e9aaaa5f9e6bb7db54436ad43389e9a52.scope: Deactivated successfully.
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.793 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.795 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.796 239853 INFO nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Creating image(s)#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.796 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.796 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Ensure instance console log exists: /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.797 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.797 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:10 np0005605476 nova_compute[239846]: 2026-02-02 17:48:10.797 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:10 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:48:10 np0005605476 podman[250943]: 2026-02-02 17:48:10.93081201 +0000 UTC m=+0.046602919 container create be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:48:10 np0005605476 systemd[1]: Started libpod-conmon-be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b.scope.
Feb  2 12:48:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:10.907141635 +0000 UTC m=+0.022932564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:11.018291645 +0000 UTC m=+0.134082574 container init be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:11.028741808 +0000 UTC m=+0.144532737 container start be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:11.034338825 +0000 UTC m=+0.150129734 container attach be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:48:11 np0005605476 nova_compute[239846]: 2026-02-02 17:48:11.067 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Successfully created port: f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:48:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 365 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.8 MiB/s wr, 133 op/s
Feb  2 12:48:11 np0005605476 silly_dubinsky[250959]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:48:11 np0005605476 silly_dubinsky[250959]: --> All data devices are unavailable
Feb  2 12:48:11 np0005605476 systemd[1]: libpod-be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b.scope: Deactivated successfully.
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:11.429388912 +0000 UTC m=+0.545179831 container died be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:48:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8b2baf646dd49eda84c92e6b559e99521ada3f496064061e73d1165cd77d7b46-merged.mount: Deactivated successfully.
Feb  2 12:48:11 np0005605476 podman[250943]: 2026-02-02 17:48:11.870688976 +0000 UTC m=+0.986479895 container remove be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:48:11 np0005605476 systemd[1]: libpod-conmon-be31ac4d4d27430fccfd715d252eee796b4761637af013a954d55ec9fe51bb2b.scope: Deactivated successfully.
Feb  2 12:48:11 np0005605476 nova_compute[239846]: 2026-02-02 17:48:11.950 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Successfully updated port: f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.001 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.001 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.002 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:48:12 np0005605476 podman[250991]: 2026-02-02 17:48:12.032093396 +0000 UTC m=+0.089866603 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.061 239853 DEBUG nova.compute.manager [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.061 239853 DEBUG nova.compute.manager [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing instance network info cache due to event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.061 239853 DEBUG oslo_concurrency.lockutils [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.157 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.306448145 +0000 UTC m=+0.039569081 container create 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:48:12 np0005605476 systemd[1]: Started libpod-conmon-581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322.scope.
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.284593702 +0000 UTC m=+0.017714658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:12 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.431 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.491 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.493804403 +0000 UTC m=+0.226925369 container init 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.494 239853 WARNING nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.494 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Triggering sync for uuid 0321c65d-e38f-4479-8c6e-d5bc3fcf809e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb  2 12:48:12 np0005605476 nova_compute[239846]: 2026-02-02 17:48:12.495 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.49939965 +0000 UTC m=+0.232520586 container start 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:48:12 np0005605476 confident_ardinghelli[251093]: 167 167
Feb  2 12:48:12 np0005605476 systemd[1]: libpod-581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322.scope: Deactivated successfully.
Feb  2 12:48:12 np0005605476 conmon[251093]: conmon 581af7b74ff9e4d2a13e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322.scope/container/memory.events
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.527986163 +0000 UTC m=+0.261107109 container attach 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.528326772 +0000 UTC m=+0.261447728 container died 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:48:12 np0005605476 systemd[1]: var-lib-containers-storage-overlay-87173a05310bfe5483aad6c58070a1ac10d1f9f7727f26f800b95d1293b29959-merged.mount: Deactivated successfully.
Feb  2 12:48:12 np0005605476 podman[251077]: 2026-02-02 17:48:12.806805458 +0000 UTC m=+0.539926384 container remove 581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ardinghelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:12 np0005605476 systemd[1]: libpod-conmon-581af7b74ff9e4d2a13ea073e2cf28a098f43dfb4a1995700e4b101a24f08322.scope: Deactivated successfully.
Feb  2 12:48:12 np0005605476 podman[251118]: 2026-02-02 17:48:12.96974233 +0000 UTC m=+0.056602989 container create 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:48:13 np0005605476 systemd[1]: Started libpod-conmon-50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb.scope.
Feb  2 12:48:13 np0005605476 podman[251118]: 2026-02-02 17:48:12.935906441 +0000 UTC m=+0.022767100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b9b61976f3a73021433fa783f422ad0deefc5def858ca1c7f7d54225ce3ae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b9b61976f3a73021433fa783f422ad0deefc5def858ca1c7f7d54225ce3ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b9b61976f3a73021433fa783f422ad0deefc5def858ca1c7f7d54225ce3ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b9b61976f3a73021433fa783f422ad0deefc5def858ca1c7f7d54225ce3ae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.140 239853 DEBUG nova.network.neutron [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:13 np0005605476 podman[251118]: 2026-02-02 17:48:13.151505121 +0000 UTC m=+0.238365750 container init 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:48:13 np0005605476 podman[251118]: 2026-02-02 17:48:13.158450856 +0000 UTC m=+0.245311485 container start 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:48:13 np0005605476 podman[251118]: 2026-02-02 17:48:13.213364577 +0000 UTC m=+0.300225246 container attach 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:48:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/81921511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.231 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.231 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Instance network_info: |[{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.231 239853 DEBUG oslo_concurrency.lockutils [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.232 239853 DEBUG nova.network.neutron [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.236 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Start _get_guest_xml network_info=[{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '51397bea-8881-45dc-9b0a-06925e1f09b1', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4e86e1d0-a313-4a91-bd24-41503d2238a5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4e86e1d0-a313-4a91-bd24-41503d2238a5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0321c65d-e38f-4479-8c6e-d5bc3fcf809e', 'attached_at': '', 'detached_at': '', 'volume_id': '4e86e1d0-a313-4a91-bd24-41503d2238a5', 'serial': '4e86e1d0-a313-4a91-bd24-41503d2238a5'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.241 239853 WARNING nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.246 239853 DEBUG nova.virt.libvirt.host [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.246 239853 DEBUG nova.virt.libvirt.host [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.249 239853 DEBUG nova.virt.libvirt.host [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.250 239853 DEBUG nova.virt.libvirt.host [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.250 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.250 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.251 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.251 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.251 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.252 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.252 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.252 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.253 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.253 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.253 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.254 239853 DEBUG nova.virt.hardware [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
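The trace above shows Nova searching CPU topologies for a 1-vCPU flavor with no constraints (preferred 0:0:0, limits 65536 per level) and finding the single candidate sockets=1, cores=1, threads=1. The enumeration step can be approximated by factorizing the vCPU count; this is a rough sketch of the idea, not Nova's actual `_get_possible_cpu_topologies` implementation:

```python
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals vcpus,
    subject to per-level maximums (a sketch of Nova's topology search)."""
    divisors = [d for d in range(1, vcpus + 1) if vcpus % d == 0]
    topos = []
    for s, c in product(divisors, divisors):
        if vcpus % (s * c):          # s*c must divide vcpus exactly
            continue
        t = vcpus // (s * c)
        if s <= max_sockets and c <= max_cores and t <= max_threads:
            topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)] -- matches the single topology in the log
```

For 1 vCPU the only factorization is 1:1:1, which is exactly the "Got 1 possible topologies" result logged above.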
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.283 239853 DEBUG nova.storage.rbd_utils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] rbd image 0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.287 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 365 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 104 op/s
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]: {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    "0": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "devices": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "/dev/loop3"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            ],
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_name": "ceph_lv0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_size": "21470642176",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "name": "ceph_lv0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "tags": {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_name": "ceph",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.crush_device_class": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.encrypted": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.objectstore": "bluestore",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_id": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.vdo": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.with_tpm": "0"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            },
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "vg_name": "ceph_vg0"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        }
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    ],
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    "1": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "devices": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "/dev/loop4"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            ],
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_name": "ceph_lv1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_size": "21470642176",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "name": "ceph_lv1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "tags": {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_name": "ceph",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.crush_device_class": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.encrypted": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.objectstore": "bluestore",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_id": "1",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.vdo": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.with_tpm": "0"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            },
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "vg_name": "ceph_vg1"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        }
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    ],
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    "2": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "devices": [
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "/dev/loop5"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            ],
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_name": "ceph_lv2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_size": "21470642176",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "name": "ceph_lv2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "tags": {
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.cluster_name": "ceph",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.crush_device_class": "",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.encrypted": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.objectstore": "bluestore",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osd_id": "2",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.vdo": "0",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:                "ceph.with_tpm": "0"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            },
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "type": "block",
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:            "vg_name": "ceph_vg2"
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:        }
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]:    ]
Feb  2 12:48:13 np0005605476 inspiring_varahamihira[251135]: }
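The JSON block emitted by the short-lived ceph container above has the shape of `ceph-volume lvm list --format json`: a map from OSD id to its logical volumes and tags. A small parser can reduce it to an OSD-to-device inventory; the sample below is an abbreviated hand copy of the structure from the log (full tag sets omitted):

```python
import json

# Abbreviated sample of the ceph-volume style report logged above;
# device paths, LV paths, and sizes are taken from the log itself.
raw = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "lv_size": "21470642176", "type": "block",
         "tags": {"ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "lv_size": "21470642176", "type": "block",
         "tags": {"ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd"}}]
}
"""

def osd_device_map(report: str) -> dict:
    """Map OSD id -> (backing device, LV path, size in GiB) for block LVs."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        for lv in lvs:
            if lv.get("type") == "block":
                size_gib = round(int(lv["lv_size"]) / 2**30, 1)
                out[int(osd_id)] = (lv["devices"][0], lv["lv_path"], size_gib)
    return out

print(osd_device_map(raw))
```

Each `lv_size` of 21470642176 bytes works out to roughly 20 GiB per OSD, consistent with the three loop-device-backed OSDs (0, 1, 2) listed in the log.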
Feb  2 12:48:13 np0005605476 systemd[1]: libpod-50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb.scope: Deactivated successfully.
Feb  2 12:48:13 np0005605476 podman[251118]: 2026-02-02 17:48:13.441438758 +0000 UTC m=+0.528299387 container died 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay-70b9b61976f3a73021433fa783f422ad0deefc5def858ca1c7f7d54225ce3ae3-merged.mount: Deactivated successfully.
Feb  2 12:48:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506072742' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.832 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
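Nova's rbd_utils runs `ceph mon dump --format=json` (logged above, returning 0 in 0.545s) to discover monitor endpoints for the libvirt disk XML it builds next. A sketch of extracting host/port pairs from such a dump; the sample document below is hand-written in the general shape of a mon dump, with only the fsid and address taken from this log:

```python
import json

# Hand-written minimal sample shaped like `ceph mon dump --format=json`;
# field names are illustrative, values match the cluster seen in the log.
mon_dump = """
{"epoch": 1,
 "fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
 "mons": [{"rank": 0, "name": "compute-0",
           "addr": "192.168.122.100:6789/0"}]}
"""

def monitor_endpoints(dump: str) -> list:
    """Return (host, port) tuples for every monitor in the dump."""
    endpoints = []
    for mon in json.loads(dump)["mons"]:
        hostport = mon["addr"].split("/")[0]   # drop the trailing /nonce
        host, port = hostport.rsplit(":", 1)
        endpoints.append((host, int(port)))
    return endpoints

print(monitor_endpoints(mon_dump))  # [('192.168.122.100', 6789)]
```

The resulting 192.168.122.100:6789 endpoint is exactly the `<host name=... port="6789"/>` that appears in the RBD disk sources of the generated domain XML further down.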
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.908 239853 DEBUG nova.virt.libvirt.vif [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1557875444',display_name='tempest-TestVolumeBackupRestore-server-1557875444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1557875444',id=6,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHOARfJpYPNxWPGY5FhhsSMyZRNhJtTLb+/6KZXTagVhDZNSjQcNKBjLmDKeCXZ+h82KxHqgfYSr9gJZi9j5XrB8u89YouhAkHtzeGJK083dmd6INejDtLxrfPjwBzBfOw==',key_name='tempest-TestVolumeBackupRestore-584902739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a421a2228c5b482197ddfa633ea50690',ramdisk_id='',reservation_id='r-co3uvyiq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1490140995',owner_user_name='tempest-TestVolumeBackupRestore-1490140995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:08Z,user_data=None,user_id='54155456326c45d8b04d2cc748cac4b1',uuid=0321c65d-e38f-4479-8c6e-d5bc3fcf809e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.909 239853 DEBUG nova.network.os_vif_util [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converting VIF {"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.911 239853 DEBUG nova.network.os_vif_util [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:13 np0005605476 nova_compute[239846]: 2026-02-02 17:48:13.914 239853 DEBUG nova.objects.instance [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0321c65d-e38f-4479-8c6e-d5bc3fcf809e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:14 np0005605476 podman[251118]: 2026-02-02 17:48:14.072322483 +0000 UTC m=+1.159183142 container remove 50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.087 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <uuid>0321c65d-e38f-4479-8c6e-d5bc3fcf809e</uuid>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <name>instance-00000006</name>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBackupRestore-server-1557875444</nova:name>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:48:13</nova:creationTime>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:user uuid="54155456326c45d8b04d2cc748cac4b1">tempest-TestVolumeBackupRestore-1490140995-project-member</nova:user>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:project uuid="a421a2228c5b482197ddfa633ea50690">tempest-TestVolumeBackupRestore-1490140995</nova:project>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <nova:port uuid="f35a02cf-f83c-44c3-a9f5-ada38e9b9db3">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="serial">0321c65d-e38f-4479-8c6e-d5bc3fcf809e</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="uuid">0321c65d-e38f-4479-8c6e-d5bc3fcf809e</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-4e86e1d0-a313-4a91-bd24-41503d2238a5">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <serial>4e86e1d0-a313-4a91-bd24-41503d2238a5</serial>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:86:40:fd"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <target dev="tapf35a02cf-f8"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/console.log" append="off"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:48:14 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:48:14 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:48:14 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:48:14 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.088 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Preparing to wait for external event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.088 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.088 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.089 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.089 239853 DEBUG nova.virt.libvirt.vif [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1557875444',display_name='tempest-TestVolumeBackupRestore-server-1557875444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1557875444',id=6,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHOARfJpYPNxWPGY5FhhsSMyZRNhJtTLb+/6KZXTagVhDZNSjQcNKBjLmDKeCXZ+h82KxHqgfYSr9gJZi9j5XrB8u89YouhAkHtzeGJK083dmd6INejDtLxrfPjwBzBfOw==',key_name='tempest-TestVolumeBackupRestore-584902739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a421a2228c5b482197ddfa633ea50690',ramdisk_id='',reservation_id='r-co3uvyiq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1490140995',owner_user_name='tempest-TestVolumeBackupRestore-1490140995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:08Z,user_data=None,user_id='54155456326c45d8b04d2cc748cac4b1',uuid=0321c65d-e38f-4479-8c6e-d5bc3fcf809e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.089 239853 DEBUG nova.network.os_vif_util [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converting VIF {"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.090 239853 DEBUG nova.network.os_vif_util [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.090 239853 DEBUG os_vif [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.091 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.091 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.091 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.093 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.094 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf35a02cf-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.094 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf35a02cf-f8, col_values=(('external_ids', {'iface-id': 'f35a02cf-f83c-44c3-a9f5-ada38e9b9db3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:40:fd', 'vm-uuid': '0321c65d-e38f-4479-8c6e-d5bc3fcf809e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.095 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:14 np0005605476 NetworkManager[49022]: <info>  [1770054494.0969] manager: (tapf35a02cf-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.097 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.101 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.102 239853 INFO os_vif [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8')#033[00m
Feb  2 12:48:14 np0005605476 systemd[1]: libpod-conmon-50a587987cb7e096c42a520e5dd77943db9e8b1af3b0bb89d3ad6877fcfd7afb.scope: Deactivated successfully.
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.168 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.169 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.169 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] No VIF found with MAC fa:16:3e:86:40:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.169 239853 INFO nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Using config drive#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.189 239853 DEBUG nova.storage.rbd_utils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] rbd image 0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.362 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.362 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.375 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.460 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.461 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.468 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.468 239853 INFO nova.compute.claims [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.536009737 +0000 UTC m=+0.045981322 container create 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.587 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.607 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.517912879 +0000 UTC m=+0.027884524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:14 np0005605476 systemd[1]: Started libpod-conmon-94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14.scope.
Feb  2 12:48:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.648663908 +0000 UTC m=+0.158635523 container init 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.656348494 +0000 UTC m=+0.166320099 container start 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.660155121 +0000 UTC m=+0.170126706 container attach 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:48:14 np0005605476 gracious_shockley[251297]: 167 167
Feb  2 12:48:14 np0005605476 systemd[1]: libpod-94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14.scope: Deactivated successfully.
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.662654271 +0000 UTC m=+0.172625876 container died 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:48:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-926867a94ccddec93dc259a6f319f41b1ad9f5740e71677147713f1f9d820c9e-merged.mount: Deactivated successfully.
Feb  2 12:48:14 np0005605476 podman[251280]: 2026-02-02 17:48:14.697144099 +0000 UTC m=+0.207115684 container remove 94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:48:14 np0005605476 systemd[1]: libpod-conmon-94509a7684fc105c69c44a477067a06aa55df4cba5c7ac77d58ad27c55de3c14.scope: Deactivated successfully.
Feb  2 12:48:14 np0005605476 podman[251340]: 2026-02-02 17:48:14.810176221 +0000 UTC m=+0.033884652 container create 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:48:14 np0005605476 systemd[1]: Started libpod-conmon-8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45.scope.
Feb  2 12:48:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f47539ef18dd0528cae6ec8a26458b3500ba2d839c8d1604a13ee89e6246093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f47539ef18dd0528cae6ec8a26458b3500ba2d839c8d1604a13ee89e6246093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f47539ef18dd0528cae6ec8a26458b3500ba2d839c8d1604a13ee89e6246093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:14 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f47539ef18dd0528cae6ec8a26458b3500ba2d839c8d1604a13ee89e6246093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:14 np0005605476 podman[251340]: 2026-02-02 17:48:14.873666343 +0000 UTC m=+0.097374794 container init 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:48:14 np0005605476 podman[251340]: 2026-02-02 17:48:14.87962008 +0000 UTC m=+0.103328561 container start 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:48:14 np0005605476 podman[251340]: 2026-02-02 17:48:14.884086905 +0000 UTC m=+0.107795356 container attach 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:48:14 np0005605476 podman[251340]: 2026-02-02 17:48:14.795209821 +0000 UTC m=+0.018918272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:48:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:14 np0005605476 nova_compute[239846]: 2026-02-02 17:48:14.996 239853 INFO nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Creating config drive at /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.001 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp74_o2w3e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.121 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp74_o2w3e" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529134968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.149 239853 DEBUG nova.storage.rbd_utils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] rbd image 0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.154 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config 0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.178 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.184 239853 DEBUG nova.compute.provider_tree [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.212 239853 DEBUG nova.scheduler.client.report [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.243 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.244 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.295 239853 DEBUG oslo_concurrency.processutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config 0321c65d-e38f-4479-8c6e-d5bc3fcf809e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.295 239853 INFO nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Deleting local config drive /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e/disk.config because it was imported into RBD.#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.308 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.308 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.334 239853 INFO nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.342 239853 DEBUG nova.network.neutron [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated VIF entry in instance network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.342 239853 DEBUG nova.network.neutron [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.3480] manager: (tapf35a02cf-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Feb  2 12:48:15 np0005605476 kernel: tapf35a02cf-f8: entered promiscuous mode
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.354 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:15Z|00080|binding|INFO|Claiming lport f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 for this chassis.
Feb  2 12:48:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:15Z|00081|binding|INFO|f35a02cf-f83c-44c3-a9f5-ada38e9b9db3: Claiming fa:16:3e:86:40:fd 10.100.0.8
Feb  2 12:48:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:15Z|00082|binding|INFO|Setting lport f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 ovn-installed in OVS
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.362 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:40:fd 10.100.0.8'], port_security=['fa:16:3e:86:40:fd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0321c65d-e38f-4479-8c6e-d5bc3fcf809e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a421a2228c5b482197ddfa633ea50690', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adacdbcc-bf38-4d82-bc30-c30a2432b1e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edaada5-7f3c-4804-8a74-c76131a9830c, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.363 155391 INFO neutron.agent.ovn.metadata.agent [-] Port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 in datapath 553031b4-d4b3-44d8-b2b1-82cbbfe28d8f bound to our chassis#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.364 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 553031b4-d4b3-44d8-b2b1-82cbbfe28d8f#033[00m
Feb  2 12:48:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:15Z|00083|binding|INFO|Setting lport f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 up in Southbound
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.367 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.370 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.373 239853 DEBUG oslo_concurrency.lockutils [req-b2514d1f-61ea-4a17-8604-5f7704eb8896 req-4a6d0da7-e4d8-4410-9fa8-f56a3bb0b887 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.374 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[734c190e-d812-47fa-b6ca-8aed3ebbc7dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.375 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap553031b4-d1 in ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.378 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap553031b4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.378 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b859bc0e-10c6-4930-ab84-e6f38b7e9907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.379 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a07d7-d5f0-45d9-a1a7-cc6c3edd3fd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 systemd-machined[208080]: New machine qemu-6-instance-00000006.
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.391 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4b9731-f0c8-4c51-b220-a151b23ee35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb  2 12:48:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 381 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.4 MiB/s wr, 126 op/s
Feb  2 12:48:15 np0005605476 systemd-udevd[251475]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.4227] device (tapf35a02cf-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.4235] device (tapf35a02cf-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.420 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a7b930-754c-4227-b14d-298a211ca03c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.444 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[786aa973-a3e7-4a1c-a82d-4eb4959eb61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.452 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b38aea0-932e-4bc7-ae3f-a28a1a8021b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.4533] manager: (tap553031b4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Feb  2 12:48:15 np0005605476 systemd-udevd[251478]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.483 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.485 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.486 239853 INFO nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Creating image(s)#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.491 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaedabe-7123-4655-bc61-e9510de82a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.495 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f529015c-2997-47b5-a329-6cb92dfcfb91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.512 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.5163] device (tap553031b4-d0): carrier: link connected
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.521 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[cef4d60c-7e2d-4e67-8160-c0736db499ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.542 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.543 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2249a1de-521f-4cc9-bae0-10d184564281]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap553031b4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:1c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373684, 'reachable_time': 15074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251546, 'error': None, 'target': 'ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.558 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[973d2386-ba83-479b-a153-15bfe547b1e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:1c11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373684, 'tstamp': 373684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251559, 'error': None, 'target': 'ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 lvm[251560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:48:15 np0005605476 lvm[251560]: VG ceph_vg0 finished
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.564 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.569 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.573 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9daa09b8-483e-4b05-9486-758f4aabce22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap553031b4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:1c:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373684, 'reachable_time': 15074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251577, 'error': None, 'target': 'ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 lvm[251581]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:48:15 np0005605476 lvm[251581]: VG ceph_vg1 finished
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.587 239853 DEBUG nova.policy [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5e6162e875a40d7b58553a223857aa3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.591 239853 DEBUG nova.compute.manager [req-ed547150-327f-4b69-b13e-9ecd896bb9a7 req-06349a83-4822-4ecb-be0a-f912a15da250 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.592 239853 DEBUG oslo_concurrency.lockutils [req-ed547150-327f-4b69-b13e-9ecd896bb9a7 req-06349a83-4822-4ecb-be0a-f912a15da250 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.592 239853 DEBUG oslo_concurrency.lockutils [req-ed547150-327f-4b69-b13e-9ecd896bb9a7 req-06349a83-4822-4ecb-be0a-f912a15da250 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.592 239853 DEBUG oslo_concurrency.lockutils [req-ed547150-327f-4b69-b13e-9ecd896bb9a7 req-06349a83-4822-4ecb-be0a-f912a15da250 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.592 239853 DEBUG nova.compute.manager [req-ed547150-327f-4b69-b13e-9ecd896bb9a7 req-06349a83-4822-4ecb-be0a-f912a15da250 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Processing event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.600 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3ca9ad-c056-4644-ab9b-d43cf69675f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 lvm[251585]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:48:15 np0005605476 lvm[251585]: VG ceph_vg2 finished
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.624 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.625 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.625 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.625 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.650 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.662 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 f386639b-0601-4234-b5b2-2c91952427d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.667 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4bfe30-e59f-48f7-9fd3-d55eec626487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.668 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap553031b4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.669 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.669 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap553031b4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:15 np0005605476 kernel: tap553031b4-d0: entered promiscuous mode
Feb  2 12:48:15 np0005605476 NetworkManager[49022]: <info>  [1770054495.7259] manager: (tap553031b4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.729 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 unruffled_goldwasser[251357]: {}
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.735 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap553031b4-d0, col_values=(('external_ids', {'iface-id': 'ee9a2586-5389-4d0d-9b9a-ef52623a5006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:15Z|00084|binding|INFO|Releasing lport ee9a2586-5389-4d0d-9b9a-ef52623a5006 from this chassis (sb_readonly=0)
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.738 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/553031b4-d4b3-44d8-b2b1-82cbbfe28d8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/553031b4-d4b3-44d8-b2b1-82cbbfe28d8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.737 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.737 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.743 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7222449-062e-4dc5-832f-b455beb9f1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.744 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.745 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/553031b4-d4b3-44d8-b2b1-82cbbfe28d8f.pid.haproxy
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 553031b4-d4b3-44d8-b2b1-82cbbfe28d8f
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:48:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:15.747 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'env', 'PROCESS_TAG=haproxy-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/553031b4-d4b3-44d8-b2b1-82cbbfe28d8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:48:15 np0005605476 systemd[1]: libpod-8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45.scope: Deactivated successfully.
Feb  2 12:48:15 np0005605476 systemd[1]: libpod-8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45.scope: Consumed 1.179s CPU time.
Feb  2 12:48:15 np0005605476 podman[251340]: 2026-02-02 17:48:15.762497647 +0000 UTC m=+0.986206078 container died 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:48:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2f47539ef18dd0528cae6ec8a26458b3500ba2d839c8d1604a13ee89e6246093-merged.mount: Deactivated successfully.
Feb  2 12:48:15 np0005605476 podman[251340]: 2026-02-02 17:48:15.815085553 +0000 UTC m=+1.038793984 container remove 8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_goldwasser, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:48:15 np0005605476 systemd[1]: libpod-conmon-8d61fe32d1e3fad57058ec12eb4741bc01aca330d7ea0ff927fbf780ca485c45.scope: Deactivated successfully.
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.866 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.868 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054495.8680413, 0321c65d-e38f-4479-8c6e-d5bc3fcf809e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.868 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] VM Started (Lifecycle Event)#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.872 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.875 239853 INFO nova.virt.libvirt.driver [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Instance spawned successfully.#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.875 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:48:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.913 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 f386639b-0601-4234-b5b2-2c91952427d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:15 np0005605476 nova_compute[239846]: 2026-02-02 17:48:15.973 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] resizing rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.045 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.052 239853 DEBUG nova.objects.instance [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'migration_context' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.055 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.056 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.056 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.056 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.057 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.057 239853 DEBUG nova.virt.libvirt.driver [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.062 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.066 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.067 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Ensure instance console log exists: /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.067 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.067 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.068 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:16 np0005605476 podman[251802]: 2026-02-02 17:48:16.085031329 +0000 UTC m=+0.042645778 container create d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.093 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.094 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054495.871772, 0321c65d-e38f-4479-8c6e-d5bc3fcf809e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.094 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:48:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:16 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.120 239853 INFO nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Took 5.33 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.120 239853 DEBUG nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.123 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.130 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054495.8721042, 0321c65d-e38f-4479-8c6e-d5bc3fcf809e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.130 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:48:16 np0005605476 systemd[1]: Started libpod-conmon-d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011.scope.
Feb  2 12:48:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:16 np0005605476 podman[251802]: 2026-02-02 17:48:16.060253333 +0000 UTC m=+0.017867752 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:48:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeebf1590106a989136bd4bfbd1b861d061236f4ac8f82d64e8e23fa591c9b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.165 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.167 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:48:16 np0005605476 podman[251802]: 2026-02-02 17:48:16.170158658 +0000 UTC m=+0.127773067 container init d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:48:16 np0005605476 podman[251802]: 2026-02-02 17:48:16.17379749 +0000 UTC m=+0.131411899 container start d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:48:16 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [NOTICE]   (251824) : New worker (251826) forked
Feb  2 12:48:16 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [NOTICE]   (251824) : Loading success.
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.196 239853 INFO nova.compute.manager [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Took 9.04 seconds to build instance.#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.212 239853 DEBUG oslo_concurrency.lockutils [None req-1c62bd7a-e249-4389-b2f6-fab0ac337877 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.213 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.213 239853 INFO nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.213 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:16.348 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.350 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:16.350 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:48:16 np0005605476 nova_compute[239846]: 2026-02-02 17:48:16.436 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Successfully created port: 6ab68bc5-611f-4eb0-b660-c813917142b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb  2 12:48:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 405 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 97 op/s
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910268849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910268849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.628 239853 DEBUG nova.compute.manager [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.629 239853 DEBUG oslo_concurrency.lockutils [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.629 239853 DEBUG oslo_concurrency.lockutils [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.630 239853 DEBUG oslo_concurrency.lockutils [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.630 239853 DEBUG nova.compute.manager [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] No waiting events found dispatching network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.631 239853 WARNING nova.compute.manager [req-aeec1a5c-0419-4e9f-b0c0-2813be80a4d0 req-fb2fd5f0-e94c-4e0f-9abc-a556c9859405 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received unexpected event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.899 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Successfully updated port: 6ab68bc5-611f-4eb0-b660-c813917142b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.919 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.920 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquired lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:17 np0005605476 nova_compute[239846]: 2026-02-02 17:48:17.920 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.014 239853 DEBUG nova.compute.manager [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-changed-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.014 239853 DEBUG nova.compute.manager [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Refreshing instance network info cache due to event network-changed-6ab68bc5-611f-4eb0-b660-c813917142b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.015 239853 DEBUG oslo_concurrency.lockutils [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.072 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.940 239853 DEBUG nova.network.neutron [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updating instance_info_cache with network_info: [{"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.964 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Releasing lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.965 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Instance network_info: |[{"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.965 239853 DEBUG oslo_concurrency.lockutils [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.965 239853 DEBUG nova.network.neutron [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Refreshing network info cache for port 6ab68bc5-611f-4eb0-b660-c813917142b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.971 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Start _get_guest_xml network_info=[{"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.980 239853 WARNING nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.988 239853 DEBUG nova.virt.libvirt.host [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.989 239853 DEBUG nova.virt.libvirt.host [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.993 239853 DEBUG nova.virt.libvirt.host [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.993 239853 DEBUG nova.virt.libvirt.host [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.994 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.994 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.994 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.995 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.996 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.996 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.996 239853 DEBUG nova.virt.hardware [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:48:18 np0005605476 nova_compute[239846]: 2026-02-02 17:48:18.998 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.138 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb  2 12:48:19 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:19.353 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 420 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.9 MiB/s wr, 122 op/s
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248065078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.582 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.602 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.606 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.619 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.828 239853 DEBUG nova.compute.manager [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.829 239853 DEBUG nova.compute.manager [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing instance network info cache due to event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.829 239853 DEBUG oslo_concurrency.lockutils [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.830 239853 DEBUG oslo_concurrency.lockutils [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:19 np0005605476 nova_compute[239846]: 2026-02-02 17:48:19.830 239853 DEBUG nova.network.neutron [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137369377' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.184 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.186 239853 DEBUG nova.virt.libvirt.vif [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-75956878',display_name='tempest-VolumesSnapshotTestJSON-instance-75956878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-75956878',id=7,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8wjp+qPXwD4f81w+HED91hMzHd3E1i2CygQgcgjVyWX/dpxgK3Z22b8YQcDjt960It4Qgk4Vv9OcKZnbt0CjDMpqynug2JK/j/lDIHHq5f6XBLWQJJQbbAzZ7fX7z2og==',key_name='tempest-keypair-1157652695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-y5az1t0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=f386639b-0601-4234-b5b2-2c91952427d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.187 239853 DEBUG nova.network.os_vif_util [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.188 239853 DEBUG nova.network.os_vif_util [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.189 239853 DEBUG nova.objects.instance [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'pci_devices' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.206 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <uuid>f386639b-0601-4234-b5b2-2c91952427d4</uuid>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <name>instance-00000007</name>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-75956878</nova:name>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:48:18</nova:creationTime>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:user uuid="e5e6162e875a40d7b58553a223857aa3">tempest-VolumesSnapshotTestJSON-2080120933-project-member</nova:user>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:project uuid="a06203a436464cf3968b3ecfc022e1dd">tempest-VolumesSnapshotTestJSON-2080120933</nova:project>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <nova:port uuid="6ab68bc5-611f-4eb0-b660-c813917142b8">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="serial">f386639b-0601-4234-b5b2-2c91952427d4</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="uuid">f386639b-0601-4234-b5b2-2c91952427d4</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/f386639b-0601-4234-b5b2-2c91952427d4_disk">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/f386639b-0601-4234-b5b2-2c91952427d4_disk.config">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:69:3c:81"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <target dev="tap6ab68bc5-61"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/console.log" append="off"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:48:20 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:48:20 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:48:20 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:48:20 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.207 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Preparing to wait for external event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.208 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.208 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.208 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.210 239853 DEBUG nova.virt.libvirt.vif [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-75956878',display_name='tempest-VolumesSnapshotTestJSON-instance-75956878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-75956878',id=7,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8wjp+qPXwD4f81w+HED91hMzHd3E1i2CygQgcgjVyWX/dpxgK3Z22b8YQcDjt960It4Qgk4Vv9OcKZnbt0CjDMpqynug2JK/j/lDIHHq5f6XBLWQJJQbbAzZ7fX7z2og==',key_name='tempest-keypair-1157652695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-y5az1t0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=f386639b-0601-4234-b5b2-2c91952427d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.210 239853 DEBUG nova.network.os_vif_util [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.211 239853 DEBUG nova.network.os_vif_util [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.211 239853 DEBUG os_vif [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.212 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.213 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.213 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.217 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.218 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ab68bc5-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.218 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ab68bc5-61, col_values=(('external_ids', {'iface-id': '6ab68bc5-611f-4eb0-b660-c813917142b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:3c:81', 'vm-uuid': 'f386639b-0601-4234-b5b2-2c91952427d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.220 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 NetworkManager[49022]: <info>  [1770054500.2211] manager: (tap6ab68bc5-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.223 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.229 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.230 239853 INFO os_vif [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61')#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.275 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.275 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.276 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No VIF found with MAC fa:16:3e:69:3c:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.276 239853 INFO nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Using config drive#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.299 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.600 239853 INFO nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Creating config drive at /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.607 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp25wt3554 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.654 239853 DEBUG nova.network.neutron [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updated VIF entry in instance network info cache for port 6ab68bc5-611f-4eb0-b660-c813917142b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.655 239853 DEBUG nova.network.neutron [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updating instance_info_cache with network_info: [{"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.674 239853 DEBUG oslo_concurrency.lockutils [req-b533d6c2-f534-45d6-950e-4acd1dc1f239 req-ca9be6ca-80b0-4c26-aa19-3fd885a02ef8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.727 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp25wt3554" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.751 239853 DEBUG nova.storage.rbd_utils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image f386639b-0601-4234-b5b2-2c91952427d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.753 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config f386639b-0601-4234-b5b2-2c91952427d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.861 239853 DEBUG oslo_concurrency.processutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config f386639b-0601-4234-b5b2-2c91952427d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.862 239853 INFO nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Deleting local config drive /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4/disk.config because it was imported into RBD.#033[00m
Feb  2 12:48:20 np0005605476 kernel: tap6ab68bc5-61: entered promiscuous mode
Feb  2 12:48:20 np0005605476 NetworkManager[49022]: <info>  [1770054500.8990] manager: (tap6ab68bc5-61): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb  2 12:48:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:20Z|00085|binding|INFO|Claiming lport 6ab68bc5-611f-4eb0-b660-c813917142b8 for this chassis.
Feb  2 12:48:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:20Z|00086|binding|INFO|6ab68bc5-611f-4eb0-b660-c813917142b8: Claiming fa:16:3e:69:3c:81 10.100.0.13
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.901 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.908 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:3c:81 10.100.0.13'], port_security=['fa:16:3e:69:3c:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f386639b-0601-4234-b5b2-2c91952427d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b00a155c-f468-43b5-8966-400475f07a2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e7308365-475e-42ee-aa15-1cfc5e7f4d4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be054af0-a896-42b9-84a2-8460e7163b78, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=6ab68bc5-611f-4eb0-b660-c813917142b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.909 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 6ab68bc5-611f-4eb0-b660-c813917142b8 in datapath b00a155c-f468-43b5-8966-400475f07a2d bound to our chassis#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.910 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b00a155c-f468-43b5-8966-400475f07a2d#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.919 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1f5fe0-390e-4f2c-a855-cd77cd14eba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.919 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb00a155c-f1 in ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:48:20 np0005605476 nova_compute[239846]: 2026-02-02 17:48:20.919 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:20Z|00087|binding|INFO|Setting lport 6ab68bc5-611f-4eb0-b660-c813917142b8 ovn-installed in OVS
Feb  2 12:48:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:20Z|00088|binding|INFO|Setting lport 6ab68bc5-611f-4eb0-b660-c813917142b8 up in Southbound
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.921 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb00a155c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.921 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f605a70-ecc9-4277-9aa5-c72c72e03424]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.923 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c99dd8-28af-41ec-a04a-264acc8c2d3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 systemd-machined[208080]: New machine qemu-7-instance-00000007.
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.938 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0a0758-1871-4b5d-a77d-53cb154b74a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.950 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a734540e-96e9-4638-8b3d-9f7bcdfbaf64]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 systemd-udevd[251977]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:48:20 np0005605476 NetworkManager[49022]: <info>  [1770054500.9730] device (tap6ab68bc5-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:48:20 np0005605476 NetworkManager[49022]: <info>  [1770054500.9736] device (tap6ab68bc5-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.975 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d88b586d-485d-4d57-85cf-8c0388ffa0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:20.979 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d63f7cf-d0d6-4787-a7d1-f0fc2dbfd8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:20 np0005605476 NetworkManager[49022]: <info>  [1770054500.9828] manager: (tapb00a155c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.003 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[088b96fd-3c44-4e7b-bf8a-88c8850a14fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.006 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb0ca03-c2b2-4568-a866-db8f397564b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 NetworkManager[49022]: <info>  [1770054501.0211] device (tapb00a155c-f0): carrier: link connected
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.025 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[e27f130b-6b2d-4de7-b69d-1e33b491c3ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.045 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[139e478e-8013-443b-a425-b7c8f00f7923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb00a155c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:40:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374235, 'reachable_time': 31054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252003, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.060 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a476a596-a5ae-437b-a8fa-223d312cfad1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:40a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374235, 'tstamp': 374235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252004, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.074 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[78cabcba-636a-4584-bfd1-efb268911ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb00a155c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:40:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374235, 'reachable_time': 31054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252005, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.101 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8c62e42e-5e6f-4de9-a8e4-629ccd987f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.158 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1f760dd4-d96c-4dbf-9fe6-b3fead9af495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.159 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb00a155c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.160 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.160 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb00a155c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.162 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:21 np0005605476 NetworkManager[49022]: <info>  [1770054501.1628] manager: (tapb00a155c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb  2 12:48:21 np0005605476 kernel: tapb00a155c-f0: entered promiscuous mode
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.165 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb00a155c-f0, col_values=(('external_ids', {'iface-id': 'c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:21 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:21Z|00089|binding|INFO|Releasing lport c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb from this chassis (sb_readonly=0)
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.167 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.168 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e38a34-c5e3-45de-885a-75fdfc9532dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.169 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-b00a155c-f468-43b5-8966-400475f07a2d
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID b00a155c-f468-43b5-8966-400475f07a2d
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:48:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:21.170 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'env', 'PROCESS_TAG=haproxy-b00a155c-f468-43b5-8966-400475f07a2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b00a155c-f468-43b5-8966-400475f07a2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.166 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.176 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.257 239853 DEBUG nova.compute.manager [req-b1b71dca-6a80-4101-a976-05ddbc8d6ad9 req-cea1bb8f-67f7-400c-a632-cafb383ad7f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.258 239853 DEBUG oslo_concurrency.lockutils [req-b1b71dca-6a80-4101-a976-05ddbc8d6ad9 req-cea1bb8f-67f7-400c-a632-cafb383ad7f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.258 239853 DEBUG oslo_concurrency.lockutils [req-b1b71dca-6a80-4101-a976-05ddbc8d6ad9 req-cea1bb8f-67f7-400c-a632-cafb383ad7f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.259 239853 DEBUG oslo_concurrency.lockutils [req-b1b71dca-6a80-4101-a976-05ddbc8d6ad9 req-cea1bb8f-67f7-400c-a632-cafb383ad7f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.259 239853 DEBUG nova.compute.manager [req-b1b71dca-6a80-4101-a976-05ddbc8d6ad9 req-cea1bb8f-67f7-400c-a632-cafb383ad7f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Processing event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:48:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 384 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 301 op/s
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.446 239853 DEBUG nova.network.neutron [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated VIF entry in instance network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.449 239853 DEBUG nova.network.neutron [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.472 239853 DEBUG oslo_concurrency.lockutils [req-78fbc690-fe2f-40f1-b4dd-c64b422c3a99 req-5e488996-b763-4e64-b7a5-8e6f9b53cd3f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:21 np0005605476 podman[252035]: 2026-02-02 17:48:21.481428555 +0000 UTC m=+0.047003721 container create 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 12:48:21 np0005605476 systemd[1]: Started libpod-conmon-934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1.scope.
Feb  2 12:48:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b08cf085516fa4282a50f5558301c7ec58fcfb437cef072428b863f14a7a441/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:21 np0005605476 podman[252035]: 2026-02-02 17:48:21.460726914 +0000 UTC m=+0.026302110 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:48:21 np0005605476 podman[252035]: 2026-02-02 17:48:21.567289394 +0000 UTC m=+0.132864580 container init 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 12:48:21 np0005605476 podman[252035]: 2026-02-02 17:48:21.574243809 +0000 UTC m=+0.139818975 container start 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:48:21 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [NOTICE]   (252072) : New worker (252081) forked
Feb  2 12:48:21 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [NOTICE]   (252072) : Loading success.
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.698 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054501.6981795, f386639b-0601-4234-b5b2-2c91952427d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.699 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] VM Started (Lifecycle Event)#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.702 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.708 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.711 239853 INFO nova.virt.libvirt.driver [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Instance spawned successfully.#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.712 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.733 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.745 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.748 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.749 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.749 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.750 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.751 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.752 239853 DEBUG nova.virt.libvirt.driver [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.786 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.786 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054501.6983037, f386639b-0601-4234-b5b2-2c91952427d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.787 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.841 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.845 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054501.7043617, f386639b-0601-4234-b5b2-2c91952427d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.845 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.859 239853 INFO nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Took 6.37 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.860 239853 DEBUG nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.896 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.899 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:48:21 np0005605476 nova_compute[239846]: 2026-02-02 17:48:21.984 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.046 239853 INFO nova.compute.manager [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Took 7.62 seconds to build instance.#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.084 239853 DEBUG oslo_concurrency.lockutils [None req-0392be52-9cc5-4638-a111-4213fc2b4508 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.154 239853 DEBUG nova.compute.manager [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.156 239853 DEBUG nova.compute.manager [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing instance network info cache due to event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.157 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.158 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:22 np0005605476 nova_compute[239846]: 2026-02-02 17:48:22.158 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3049367013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3049367013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.219 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated VIF entry in instance network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.219 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.242 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.242 239853 DEBUG nova.compute.manager [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.243 239853 DEBUG nova.compute.manager [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing instance network info cache due to event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.243 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.243 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.243 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.366 239853 DEBUG nova.compute.manager [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.367 239853 DEBUG oslo_concurrency.lockutils [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.367 239853 DEBUG oslo_concurrency.lockutils [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.367 239853 DEBUG oslo_concurrency.lockutils [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.367 239853 DEBUG nova.compute.manager [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] No waiting events found dispatching network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:23 np0005605476 nova_compute[239846]: 2026-02-02 17:48:23.368 239853 WARNING nova.compute.manager [req-e1f2ba1d-1270-455e-98d4-3a3a90720720 req-35f50b81-c759-4496-bce9-4a9c46998690 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received unexpected event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:48:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 384 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 278 op/s
Feb  2 12:48:24 np0005605476 nova_compute[239846]: 2026-02-02 17:48:24.506 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated VIF entry in instance network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:24 np0005605476 nova_compute[239846]: 2026-02-02 17:48:24.507 239853 DEBUG nova.network.neutron [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:24 np0005605476 nova_compute[239846]: 2026-02-02 17:48:24.522 239853 DEBUG oslo_concurrency.lockutils [req-8935d970-9461-407f-92cf-d0d4e33d82ba req-5c788aef-3bb4-4793-9d63-c003c1c66345 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:24 np0005605476 nova_compute[239846]: 2026-02-02 17:48:24.612 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb  2 12:48:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb  2 12:48:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.221 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 340 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.469 239853 DEBUG nova.compute.manager [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-changed-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.469 239853 DEBUG nova.compute.manager [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Refreshing instance network info cache due to event network-changed-6ab68bc5-611f-4eb0-b660-c813917142b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.470 239853 DEBUG oslo_concurrency.lockutils [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.470 239853 DEBUG oslo_concurrency.lockutils [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:25 np0005605476 nova_compute[239846]: 2026-02-02 17:48:25.470 239853 DEBUG nova.network.neutron [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Refreshing network info cache for port 6ab68bc5-611f-4eb0-b660-c813917142b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184500432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb  2 12:48:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb  2 12:48:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb  2 12:48:26 np0005605476 nova_compute[239846]: 2026-02-02 17:48:26.855 239853 DEBUG nova.network.neutron [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updated VIF entry in instance network info cache for port 6ab68bc5-611f-4eb0-b660-c813917142b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:26 np0005605476 nova_compute[239846]: 2026-02-02 17:48:26.856 239853 DEBUG nova.network.neutron [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updating instance_info_cache with network_info: [{"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:26 np0005605476 nova_compute[239846]: 2026-02-02 17:48:26.872 239853 DEBUG oslo_concurrency.lockutils [req-5f9911ce-303f-4fb9-b2a9-def9583f1267 req-e53ebd60-3689-4c19-9b7f-d0f313d53e3b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-f386639b-0601-4234-b5b2-2c91952427d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb  2 12:48:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb  2 12:48:26 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb  2 12:48:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:27Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:40:fd 10.100.0.8
Feb  2 12:48:27 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:27Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:40:fd 10.100.0.8
Feb  2 12:48:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 319 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 36 KiB/s wr, 222 op/s
Feb  2 12:48:28 np0005605476 nova_compute[239846]: 2026-02-02 17:48:28.306 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2905655669' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 325 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 154 KiB/s wr, 199 op/s
Feb  2 12:48:29 np0005605476 nova_compute[239846]: 2026-02-02 17:48:29.615 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb  2 12:48:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb  2 12:48:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.222 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.445 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.445 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.446 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:48:30 np0005605476 nova_compute[239846]: 2026-02-02 17:48:30.446 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0321c65d-e38f-4479-8c6e-d5bc3fcf809e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:31 np0005605476 nova_compute[239846]: 2026-02-02 17:48:31.399 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Feb  2 12:48:31 np0005605476 nova_compute[239846]: 2026-02-02 17:48:31.423 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:31 np0005605476 nova_compute[239846]: 2026-02-02 17:48:31.423 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb  2 12:48:32 np0005605476 nova_compute[239846]: 2026-02-02 17:48:32.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/570407832' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/570407832' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:33 np0005605476 nova_compute[239846]: 2026-02-02 17:48:33.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 4.0 MiB/s wr, 177 op/s
Feb  2 12:48:33 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:33Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:3c:81 10.100.0.13
Feb  2 12:48:33 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:33Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:3c:81 10.100.0.13
Feb  2 12:48:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb  2 12:48:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb  2 12:48:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.274 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.274 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.274 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.274 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.274 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.444 239853 DEBUG nova.compute.manager [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.445 239853 DEBUG nova.compute.manager [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing instance network info cache due to event network-changed-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.445 239853 DEBUG oslo_concurrency.lockutils [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.446 239853 DEBUG oslo_concurrency.lockutils [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.446 239853 DEBUG nova.network.neutron [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Refreshing network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.572 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.573 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.574 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.574 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.574 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.575 239853 INFO nova.compute.manager [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Terminating instance#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.577 239853 DEBUG nova.compute.manager [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.617 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 kernel: tapf35a02cf-f8 (unregistering): left promiscuous mode
Feb  2 12:48:34 np0005605476 NetworkManager[49022]: <info>  [1770054514.6383] device (tapf35a02cf-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:48:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:34Z|00090|binding|INFO|Releasing lport f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 from this chassis (sb_readonly=0)
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.645 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:34Z|00091|binding|INFO|Setting lport f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 down in Southbound
Feb  2 12:48:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:34Z|00092|binding|INFO|Removing iface tapf35a02cf-f8 ovn-installed in OVS
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.647 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.654 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:34.664 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:40:fd 10.100.0.8'], port_security=['fa:16:3e:86:40:fd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0321c65d-e38f-4479-8c6e-d5bc3fcf809e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a421a2228c5b482197ddfa633ea50690', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adacdbcc-bf38-4d82-bc30-c30a2432b1e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edaada5-7f3c-4804-8a74-c76131a9830c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:48:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:34.665 155391 INFO neutron.agent.ovn.metadata.agent [-] Port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 in datapath 553031b4-d4b3-44d8-b2b1-82cbbfe28d8f unbound from our chassis#033[00m
Feb  2 12:48:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:34.666 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:48:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:34.668 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6582a85f-4e28-48d8-b321-0d06cfdef991]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:34.670 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f namespace which is not needed anymore#033[00m
Feb  2 12:48:34 np0005605476 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb  2 12:48:34 np0005605476 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 11.532s CPU time.
Feb  2 12:48:34 np0005605476 systemd-machined[208080]: Machine qemu-6-instance-00000006 terminated.
Feb  2 12:48:34 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [NOTICE]   (251824) : haproxy version is 2.8.14-c23fe91
Feb  2 12:48:34 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [NOTICE]   (251824) : path to executable is /usr/sbin/haproxy
Feb  2 12:48:34 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [WARNING]  (251824) : Exiting Master process...
Feb  2 12:48:34 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [ALERT]    (251824) : Current worker (251826) exited with code 143 (Terminated)
Feb  2 12:48:34 np0005605476 neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f[251820]: [WARNING]  (251824) : All workers exited. Exiting... (0)
Feb  2 12:48:34 np0005605476 systemd[1]: libpod-d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011.scope: Deactivated successfully.
Feb  2 12:48:34 np0005605476 conmon[251820]: conmon d1d2c09407eadab446fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011.scope/container/memory.events
Feb  2 12:48:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031034666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:34 np0005605476 podman[252151]: 2026-02-02 17:48:34.791632963 +0000 UTC m=+0.042494314 container died d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.791 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.803 239853 INFO nova.virt.libvirt.driver [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Instance destroyed successfully.#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.804 239853 DEBUG nova.objects.instance [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lazy-loading 'resources' on Instance uuid 0321c65d-e38f-4479-8c6e-d5bc3fcf809e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.805 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.819 239853 DEBUG nova.virt.libvirt.vif [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1557875444',display_name='tempest-TestVolumeBackupRestore-server-1557875444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1557875444',id=6,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHOARfJpYPNxWPGY5FhhsSMyZRNhJtTLb+/6KZXTagVhDZNSjQcNKBjLmDKeCXZ+h82KxHqgfYSr9gJZi9j5XrB8u89YouhAkHtzeGJK083dmd6INejDtLxrfPjwBzBfOw==',key_name='tempest-TestVolumeBackupRestore-584902739',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:48:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a421a2228c5b482197ddfa633ea50690',ramdisk_id='',reservation_id='r-co3uvyiq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1490140995',owner_user_name='tempest-TestVolumeBackupRestore-1490140995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:48:16Z,user_data=None,user_id='54155456326c45d8b04d2cc748cac4b1',uuid=0321c65d-e38f-4479-8c6e-d5bc3fcf809e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.821 239853 DEBUG nova.network.os_vif_util [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converting VIF {"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.821 239853 DEBUG nova.network.os_vif_util [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.822 239853 DEBUG os_vif [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.823 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.824 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf35a02cf-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.825 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011-userdata-shm.mount: Deactivated successfully.
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.830 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:48:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-cbeebf1590106a989136bd4bfbd1b861d061236f4ac8f82d64e8e23fa591c9b5-merged.mount: Deactivated successfully.
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.835 239853 INFO os_vif [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:40:fd,bridge_name='br-int',has_traffic_filtering=True,id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3,network=Network(553031b4-d4b3-44d8-b2b1-82cbbfe28d8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35a02cf-f8')#033[00m
Feb  2 12:48:34 np0005605476 podman[252151]: 2026-02-02 17:48:34.910747076 +0000 UTC m=+0.161608417 container cleanup d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 12:48:34 np0005605476 systemd[1]: libpod-conmon-d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011.scope: Deactivated successfully.
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.955 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.955 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.960 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:48:34 np0005605476 nova_compute[239846]: 2026-02-02 17:48:34.960 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:48:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:35 np0005605476 podman[252211]: 2026-02-02 17:48:35.032182414 +0000 UTC m=+0.105610385 container remove d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.036 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[34086ac8-a3c9-408b-b5ce-a388fe459453]: (4, ('Mon Feb  2 05:48:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f (d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011)\nd1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011\nMon Feb  2 05:48:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f (d1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011)\nd1d2c09407eadab446fbc2bf9be1fd1cacb566daa101deffadaa02a3168b1011\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.037 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b2678ad3-1234-44ef-9c6c-e5139f0fe10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.038 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap553031b4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.040 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:35 np0005605476 kernel: tap553031b4-d0: left promiscuous mode
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.047 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.049 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[941d5a2d-5166-4e90-9c75-270f611e991d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.060 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a7807e6c-cf98-4883-95f8-977759e86079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.062 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fa5c4b-4a4c-4cf3-a418-fcf2a1059d06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.076 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[84964bfb-3d96-4b16-a960-5b253fda1c16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373677, 'reachable_time': 40854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252226, 'error': None, 'target': 'ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.078 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-553031b4-d4b3-44d8-b2b1-82cbbfe28d8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:48:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:35.079 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[b61ca26c-abb8-4840-9636-c2ac771fb64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:35 np0005605476 systemd[1]: run-netns-ovnmeta\x2d553031b4\x2dd4b3\x2d44d8\x2db2b1\x2d82cbbfe28d8f.mount: Deactivated successfully.
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.118 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.119 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=59.96712245233357GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.119 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.120 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.231 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 0321c65d-e38f-4479-8c6e-d5bc3fcf809e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.231 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance f386639b-0601-4234-b5b2-2c91952427d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.231 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.232 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.247 239853 INFO nova.virt.libvirt.driver [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Deleting instance files /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e_del#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.247 239853 INFO nova.virt.libvirt.driver [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Deletion of /var/lib/nova/instances/0321c65d-e38f-4479-8c6e-d5bc3fcf809e_del complete#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.296 239853 INFO nova.compute.manager [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.297 239853 DEBUG oslo.service.loopingcall [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.297 239853 DEBUG nova.compute.manager [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.297 239853 DEBUG nova.network.neutron [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.325 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 375 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 881 KiB/s rd, 5.5 MiB/s wr, 315 op/s
Feb  2 12:48:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031136595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.876 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.881 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:48:35 np0005605476 nova_compute[239846]: 2026-02-02 17:48:35.913 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.005 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.006 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.037 239853 DEBUG nova.network.neutron [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updated VIF entry in instance network info cache for port f35a02cf-f83c-44c3-a9f5-ada38e9b9db3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.038 239853 DEBUG nova.network.neutron [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [{"id": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "address": "fa:16:3e:86:40:fd", "network": {"id": "553031b4-d4b3-44d8-b2b1-82cbbfe28d8f", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2135488397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a421a2228c5b482197ddfa633ea50690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35a02cf-f8", "ovs_interfaceid": "f35a02cf-f83c-44c3-a9f5-ada38e9b9db3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.140 239853 DEBUG oslo_concurrency.lockutils [req-097f36e7-b572-46da-ba1e-96ed3d377198 req-ee828d36-9123-4e40-947e-73ec3be2a6be e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-0321c65d-e38f-4479-8c6e-d5bc3fcf809e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.384 239853 DEBUG nova.compute.manager [req-9b72bbf2-3f0c-41ed-824d-bd6f89d114a8 req-594f1527-aa67-4e80-9190-6428ec93e35a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-deleted-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.385 239853 INFO nova.compute.manager [req-9b72bbf2-3f0c-41ed-824d-bd6f89d114a8 req-594f1527-aa67-4e80-9190-6428ec93e35a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Neutron deleted interface f35a02cf-f83c-44c3-a9f5-ada38e9b9db3; detaching it from the instance and deleting it from the info cache#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.385 239853 DEBUG nova.network.neutron [req-9b72bbf2-3f0c-41ed-824d-bd6f89d114a8 req-594f1527-aa67-4e80-9190-6428ec93e35a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.395 239853 DEBUG nova.network.neutron [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.732 239853 INFO nova.compute.manager [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Took 1.43 seconds to deallocate network for instance.#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.737 239853 DEBUG nova.compute.manager [req-9b72bbf2-3f0c-41ed-824d-bd6f89d114a8 req-594f1527-aa67-4e80-9190-6428ec93e35a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Detach interface failed, port_id=f35a02cf-f83c-44c3-a9f5-ada38e9b9db3, reason: Instance 0321c65d-e38f-4479-8c6e-d5bc3fcf809e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.746 239853 DEBUG nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-unplugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.746 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.746 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.746 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] No waiting events found dispatching network-vif-unplugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-unplugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG oslo_concurrency.lockutils [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.747 239853 DEBUG nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] No waiting events found dispatching network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.748 239853 WARNING nova.compute.manager [req-8276d24a-ada0-4168-b461-d0984af69284 req-9ece44c5-3be1-4d8c-ba97-ca3d74b68e0b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Received unexpected event network-vif-plugged-f35a02cf-f83c-44c3-a9f5-ada38e9b9db3 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:48:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:48:36
Feb  2 12:48:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:48:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:48:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Feb  2 12:48:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:48:36 np0005605476 nova_compute[239846]: 2026-02-02 17:48:36.944 239853 INFO nova.compute.manager [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.006 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.056 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.057 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.108 239853 DEBUG oslo_concurrency.processutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 395 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 730 KiB/s rd, 5.1 MiB/s wr, 271 op/s
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870909514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.640 239853 DEBUG oslo_concurrency.processutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.646 239853 DEBUG nova.compute.provider_tree [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.660 239853 DEBUG nova.scheduler.client.report [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.680 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:48:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.705 239853 INFO nova.scheduler.client.report [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Deleted allocations for instance 0321c65d-e38f-4479-8c6e-d5bc3fcf809e#033[00m
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699224924' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699224924' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:37 np0005605476 nova_compute[239846]: 2026-02-02 17:48:37.777 239853 DEBUG oslo_concurrency.lockutils [None req-436f825d-1641-4f88-85de-85aa26aacb2b 54155456326c45d8b04d2cc748cac4b1 a421a2228c5b482197ddfa633ea50690 - - default default] Lock "0321c65d-e38f-4479-8c6e-d5bc3fcf809e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:38 np0005605476 nova_compute[239846]: 2026-02-02 17:48:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:38 np0005605476 nova_compute[239846]: 2026-02-02 17:48:38.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:48:38 np0005605476 nova_compute[239846]: 2026-02-02 17:48:38.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:48:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 395 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 609 KiB/s rd, 3.5 MiB/s wr, 204 op/s
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb  2 12:48:39 np0005605476 podman[252272]: 2026-02-02 17:48:39.595649294 +0000 UTC m=+0.044362346 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 12:48:39 np0005605476 nova_compute[239846]: 2026-02-02 17:48:39.659 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:39 np0005605476 nova_compute[239846]: 2026-02-02 17:48:39.825 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97038391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97038391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037041638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037041638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.463 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.463 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.508 239853 DEBUG nova.objects.instance [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.529 239853 INFO nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.576 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.934 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.934 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:40 np0005605476 nova_compute[239846]: 2026-02-02 17:48:40.934 239853 INFO nova.compute.manager [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Attaching volume a3092a06-e1d3-4b42-bd2c-5414dac74057 to /dev/vdb#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.076 239853 DEBUG os_brick.utils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.078 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.089 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.090 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[010889dd-fa08-4a09-9237-29f32ba42955]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.091 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.098 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.098 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbe3bee-025a-485d-9264-6befef828c31]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.100 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.107 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.107 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3581f7-ff1a-4e33-b13c-3de66d0cc5c1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.108 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[74e274be-3171-467d-9912-d672a97cb7d0]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.109 239853 DEBUG oslo_concurrency.processutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.123 239853 DEBUG oslo_concurrency.processutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.125 239853 DEBUG os_brick.initiator.connectors.lightos [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.125 239853 DEBUG os_brick.initiator.connectors.lightos [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.125 239853 DEBUG os_brick.initiator.connectors.lightos [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.126 239853 DEBUG os_brick.utils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:48:41 np0005605476 nova_compute[239846]: 2026-02-02 17:48:41.126 239853 DEBUG nova.virt.block_device [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updating existing volume attachment record: b8ece670-5137-44f0-b84c-85789216b634 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:48:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 2 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 289 active+clean; 318 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.9 MiB/s wr, 243 op/s
Feb  2 12:48:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585059825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.006 239853 DEBUG nova.objects.instance [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.035 239853 DEBUG nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Attempting to attach volume a3092a06-e1d3-4b42-bd2c-5414dac74057 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.039 239853 DEBUG nova.virt.libvirt.guest [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a3092a06-e1d3-4b42-bd2c-5414dac74057">
Feb  2 12:48:42 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:48:42 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:48:42 np0005605476 nova_compute[239846]:  <serial>a3092a06-e1d3-4b42-bd2c-5414dac74057</serial>
Feb  2 12:48:42 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:48:42 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.139 239853 DEBUG nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.140 239853 DEBUG nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.140 239853 DEBUG nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.140 239853 DEBUG nova.virt.libvirt.driver [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No VIF found with MAC fa:16:3e:69:3c:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:48:42 np0005605476 nova_compute[239846]: 2026-02-02 17:48:42.329 239853 DEBUG oslo_concurrency.lockutils [None req-3cfa8963-eba5-451f-bf07-78fe929e36b3 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:42 np0005605476 podman[252319]: 2026-02-02 17:48:42.617643705 +0000 UTC m=+0.067959289 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 2 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 289 active+clean; 318 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 33 KiB/s wr, 152 op/s
Feb  2 12:48:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:44Z|00093|binding|INFO|Releasing lport c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb from this chassis (sb_readonly=0)
Feb  2 12:48:44 np0005605476 nova_compute[239846]: 2026-02-02 17:48:44.459 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:44 np0005605476 nova_compute[239846]: 2026-02-02 17:48:44.661 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb  2 12:48:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb  2 12:48:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb  2 12:48:44 np0005605476 nova_compute[239846]: 2026-02-02 17:48:44.827 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb  2 12:48:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb  2 12:48:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb  2 12:48:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 34 KiB/s wr, 167 op/s
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.517 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.518 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.531 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.611 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.612 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.621 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.622 239853 INFO nova.compute.claims [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:48:45 np0005605476 nova_compute[239846]: 2026-02-02 17:48:45.730 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb  2 12:48:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb  2 12:48:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb  2 12:48:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628517366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.276 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.282 239853 DEBUG nova.compute.provider_tree [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.296 239853 DEBUG nova.scheduler.client.report [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.316 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.317 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.375 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.376 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.399 239853 INFO nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.419 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.516 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.518 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.519 239853 INFO nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Creating image(s)#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.552 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.574 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.598 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.601 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:46.638 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:46.639 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:46.640 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.656 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.657 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.657 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.658 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.682 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.686 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.729 239853 DEBUG nova.policy [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b2b7987477543268373aac3ffda0c37', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.893 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:46 np0005605476 nova_compute[239846]: 2026-02-02 17:48:46.944 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] resizing rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.041 239853 DEBUG nova.objects.instance [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.058 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.059 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Ensure instance console log exists: /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.060 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.060 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.061 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 5.5 KiB/s wr, 68 op/s
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613625091415617 of space, bias 1.0, pg target 0.22840875274246852 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006974245364400351 of space, bias 1.0, pg target 0.20922736093201053 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.218353782197247e-07 of space, bias 1.0, pg target 0.0002465506134659174 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659593387779379 of space, bias 1.0, pg target 0.1997878016333814 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.01967141087527e-06 of space, bias 4.0, pg target 0.0012236056930503242 quantized to 16 (current 16)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:48:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:48:47 np0005605476 nova_compute[239846]: 2026-02-02 17:48:47.819 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Successfully created port: 07fd1022-7037-4a03-8c56-737464703551 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 12:48:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb  2 12:48:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb  2 12:48:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb  2 12:48:48 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:48Z|00094|binding|INFO|Releasing lport c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb from this chassis (sb_readonly=0)
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.822 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.909 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Successfully updated port: 07fd1022-7037-4a03-8c56-737464703551 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.923 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.923 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquired lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.923 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.994 239853 DEBUG nova.compute.manager [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-changed-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.994 239853 DEBUG nova.compute.manager [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Refreshing instance network info cache due to event network-changed-07fd1022-7037-4a03-8c56-737464703551. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 12:48:48 np0005605476 nova_compute[239846]: 2026-02-02 17:48:48.995 239853 DEBUG oslo_concurrency.lockutils [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:48:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb  2 12:48:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb  2 12:48:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.098 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb  2 12:48:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 230 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.664 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.802 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054514.800502, 0321c65d-e38f-4479-8c6e-d5bc3fcf809e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.802 239853 INFO nova.compute.manager [-] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] VM Stopped (Lifecycle Event)
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.829 239853 DEBUG nova.compute.manager [None req-dd2bb884-e030-44ee-97c8-c3234799359e - - - - - -] [instance: 0321c65d-e38f-4479-8c6e-d5bc3fcf809e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:48:49 np0005605476 nova_compute[239846]: 2026-02-02 17:48:49.830 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb  2 12:48:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb  2 12:48:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.518 239853 DEBUG nova.network.neutron [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updating instance_info_cache with network_info: [{"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.538 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Releasing lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.539 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Instance network_info: |[{"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.540 239853 DEBUG oslo_concurrency.lockutils [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.540 239853 DEBUG nova.network.neutron [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Refreshing network info cache for port 07fd1022-7037-4a03-8c56-737464703551 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.546 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Start _get_guest_xml network_info=[{"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.553 239853 WARNING nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.565 239853 DEBUG nova.virt.libvirt.host [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.566 239853 DEBUG nova.virt.libvirt.host [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.570 239853 DEBUG nova.virt.libvirt.host [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.570 239853 DEBUG nova.virt.libvirt.host [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.571 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.571 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.572 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.572 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.573 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.573 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.573 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.574 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.574 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.574 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.575 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.575 239853 DEBUG nova.virt.hardware [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb  2 12:48:50 np0005605476 nova_compute[239846]: 2026-02-02 17:48:50.580 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683065411' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.093 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.129 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.134 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:48:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 260 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 5.3 MiB/s wr, 183 op/s
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965532118' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.678 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.680 239853 DEBUG nova.virt.libvirt.vif [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-441057123',display_name='tempest-VolumesBackupsTest-instance-441057123',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-441057123',id=8,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNTCyGzGvgWBxW2biKNEkphRdb+/933KJloZq2c5+QHh0667htFhqdayfXzcKBdVt/9i5Q4P+p7ZcAAXnsFy6XQPvwjP47n4nw8+X/mzl+GON90vJUqVbTo46HKL78gj0A==',key_name='tempest-keypair-62621088',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-oi1y7ziq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8d00d4e2-c297-40a8-b6fe-9418b8da0b2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.680 239853 DEBUG nova.network.os_vif_util [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.681 239853 DEBUG nova.network.os_vif_util [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.682 239853 DEBUG nova.objects.instance [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.786 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <uuid>8d00d4e2-c297-40a8-b6fe-9418b8da0b2f</uuid>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <name>instance-00000008</name>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesBackupsTest-instance-441057123</nova:name>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:48:50</nova:creationTime>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:user uuid="7b2b7987477543268373aac3ffda0c37">tempest-VolumesBackupsTest-27790021-project-member</nova:user>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:project uuid="7ff6dfb8be334eeb94d13588a609b2bd">tempest-VolumesBackupsTest-27790021</nova:project>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <nova:port uuid="07fd1022-7037-4a03-8c56-737464703551">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="serial">8d00d4e2-c297-40a8-b6fe-9418b8da0b2f</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="uuid">8d00d4e2-c297-40a8-b6fe-9418b8da0b2f</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:00:f3:40"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <target dev="tap07fd1022-70"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/console.log" append="off"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:48:51 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:48:51 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:48:51 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:48:51 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.787 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Preparing to wait for external event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.787 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.787 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.788 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.788 239853 DEBUG nova.virt.libvirt.vif [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:48:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-441057123',display_name='tempest-VolumesBackupsTest-instance-441057123',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-441057123',id=8,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNTCyGzGvgWBxW2biKNEkphRdb+/933KJloZq2c5+QHh0667htFhqdayfXzcKBdVt/9i5Q4P+p7ZcAAXnsFy6XQPvwjP47n4nw8+X/mzl+GON90vJUqVbTo46HKL78gj0A==',key_name='tempest-keypair-62621088',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-oi1y7ziq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:48:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8d00d4e2-c297-40a8-b6fe-9418b8da0b2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.789 239853 DEBUG nova.network.os_vif_util [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.789 239853 DEBUG nova.network.os_vif_util [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.789 239853 DEBUG os_vif [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.790 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.790 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.790 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.793 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.793 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07fd1022-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.793 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07fd1022-70, col_values=(('external_ids', {'iface-id': '07fd1022-7037-4a03-8c56-737464703551', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:f3:40', 'vm-uuid': '8d00d4e2-c297-40a8-b6fe-9418b8da0b2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.794 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:51 np0005605476 NetworkManager[49022]: <info>  [1770054531.7955] manager: (tap07fd1022-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.797 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.799 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.800 239853 INFO os_vif [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70')#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.852 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.852 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.853 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No VIF found with MAC fa:16:3e:00:f3:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.853 239853 INFO nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Using config drive#033[00m
Feb  2 12:48:51 np0005605476 nova_compute[239846]: 2026-02-02 17:48:51.871 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb  2 12:48:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb  2 12:48:52 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.256 239853 INFO nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Creating config drive at /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.260 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplch2g6bs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.342 239853 DEBUG nova.network.neutron [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updated VIF entry in instance network info cache for port 07fd1022-7037-4a03-8c56-737464703551. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.343 239853 DEBUG nova.network.neutron [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updating instance_info_cache with network_info: [{"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.356 239853 DEBUG oslo_concurrency.lockutils [req-bb533b1d-de01-4477-b1b8-134b809850ab req-e34b7317-abb3-49e7-a447-8c0e8b7a92d2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.379 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplch2g6bs" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.401 239853 DEBUG nova.storage.rbd_utils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] rbd image 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.404 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.508 239853 DEBUG oslo_concurrency.processutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.509 239853 INFO nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Deleting local config drive /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f/disk.config because it was imported into RBD.#033[00m
Feb  2 12:48:52 np0005605476 kernel: tap07fd1022-70: entered promiscuous mode
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.5507] manager: (tap07fd1022-70): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.551 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:52Z|00095|binding|INFO|Claiming lport 07fd1022-7037-4a03-8c56-737464703551 for this chassis.
Feb  2 12:48:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:52Z|00096|binding|INFO|07fd1022-7037-4a03-8c56-737464703551: Claiming fa:16:3e:00:f3:40 10.100.0.6
Feb  2 12:48:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:52Z|00097|binding|INFO|Setting lport 07fd1022-7037-4a03-8c56-737464703551 ovn-installed in OVS
Feb  2 12:48:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:52Z|00098|binding|INFO|Setting lport 07fd1022-7037-4a03-8c56-737464703551 up in Southbound
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.561 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f3:40 10.100.0.6'], port_security=['fa:16:3e:00:f3:40 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d00d4e2-c297-40a8-b6fe-9418b8da0b2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962ccc49-6579-46f5-b577-7995d4fef976', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a28a626-93bb-44f5-9e6f-8b218f41aeb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58e5e8fa-47da-4a70-b729-f06398e2ea5a, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=07fd1022-7037-4a03-8c56-737464703551) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.561 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.564 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 07fd1022-7037-4a03-8c56-737464703551 in datapath 962ccc49-6579-46f5-b577-7995d4fef976 bound to our chassis#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.566 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962ccc49-6579-46f5-b577-7995d4fef976#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.575 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d15f2024-c6a4-46b3-aab0-6dcfa958ad0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.576 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962ccc49-61 in ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:48:52 np0005605476 systemd-machined[208080]: New machine qemu-8-instance-00000008.
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.578 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962ccc49-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.578 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c091b033-d169-4eb0-986b-08358b76a88b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.579 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba033f4-1f3e-40af-aa3a-5930c91b9fc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.589 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[f55b1b28-a683-44b7-a0e9-2fcb2c836f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 systemd-udevd[252670]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.6035] device (tap07fd1022-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.6039] device (tap07fd1022-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.608 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[62a0b184-1b49-4321-9565-9e1170cfbbf1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.631 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[efbd7cdb-8d85-4f43-b583-287e2048065a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.636 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2771b474-276e-4283-bba7-8fffcfc24037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.6375] manager: (tap962ccc49-60): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Feb  2 12:48:52 np0005605476 systemd-udevd[252672]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.656 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f92d022b-9057-407b-9961-0d074f9fbc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.658 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[062d910b-7650-4a74-80af-b1feaa9be6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.6725] device (tap962ccc49-60): carrier: link connected
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.674 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[71e8ca8e-ece7-4b66-a829-31a9204adff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.687 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[869fcd9d-3717-4200-ad9c-e12cd0125d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962ccc49-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:57:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377400, 'reachable_time': 38752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252700, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.696 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[71f5da3e-ac52-422a-8d27-fb4664d59a8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:5785'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377400, 'tstamp': 377400}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252701, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.713 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbe3301-f145-46a8-acf0-e9d6ec57e3c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962ccc49-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:57:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377400, 'reachable_time': 38752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252702, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.733 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8fa209-73be-4661-91b6-af5131ba5bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.774 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[22b345ad-f336-4e43-9b6e-f6aabd366bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.775 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962ccc49-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.775 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.776 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962ccc49-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:52 np0005605476 NetworkManager[49022]: <info>  [1770054532.7785] manager: (tap962ccc49-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb  2 12:48:52 np0005605476 kernel: tap962ccc49-60: entered promiscuous mode
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.777 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.780 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962ccc49-60, col_values=(('external_ids', {'iface-id': '7ef9b558-600a-49d5-9b00-0242ee1bfb90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.781 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:52Z|00099|binding|INFO|Releasing lport 7ef9b558-600a-49d5-9b00-0242ee1bfb90 from this chassis (sb_readonly=0)
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.784 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.785 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4af50a-24d0-4b98-9224-d9034d2c025b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.785 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-962ccc49-6579-46f5-b577-7995d4fef976
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/962ccc49-6579-46f5-b577-7995d4fef976.pid.haproxy
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 962ccc49-6579-46f5-b577-7995d4fef976
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:48:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:52.786 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'env', 'PROCESS_TAG=haproxy-962ccc49-6579-46f5-b577-7995d4fef976', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962ccc49-6579-46f5-b577-7995d4fef976.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.788 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.797 239853 DEBUG nova.compute.manager [req-229c46b8-a64e-4c9f-92e8-f9aed96f0faf req-72bfaf51-970d-457e-a79e-39d5f06ce2b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.797 239853 DEBUG oslo_concurrency.lockutils [req-229c46b8-a64e-4c9f-92e8-f9aed96f0faf req-72bfaf51-970d-457e-a79e-39d5f06ce2b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.798 239853 DEBUG oslo_concurrency.lockutils [req-229c46b8-a64e-4c9f-92e8-f9aed96f0faf req-72bfaf51-970d-457e-a79e-39d5f06ce2b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.798 239853 DEBUG oslo_concurrency.lockutils [req-229c46b8-a64e-4c9f-92e8-f9aed96f0faf req-72bfaf51-970d-457e-a79e-39d5f06ce2b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:52 np0005605476 nova_compute[239846]: 2026-02-02 17:48:52.798 239853 DEBUG nova.compute.manager [req-229c46b8-a64e-4c9f-92e8-f9aed96f0faf req-72bfaf51-970d-457e-a79e-39d5f06ce2b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Processing event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.057 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054533.057471, 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.058 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] VM Started (Lifecycle Event)#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.060 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.062 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.065 239853 INFO nova.virt.libvirt.driver [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Instance spawned successfully.#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.065 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.081 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.084 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:48:53 np0005605476 podman[252776]: 2026-02-02 17:48:53.090205837 +0000 UTC m=+0.047318969 container create 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.091 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.091 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.092 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.092 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.092 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.093 239853 DEBUG nova.virt.libvirt.driver [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:48:53 np0005605476 systemd[1]: Started libpod-conmon-58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29.scope.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.125 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.127 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054533.0599413, 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.128 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] VM Paused (Lifecycle Event)
Feb  2 12:48:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:48:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2507fe5a29bc482580b96a13dae67f7d8b04fe857cd4302c4bf15848bb5bdba7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.147 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:48:53 np0005605476 podman[252776]: 2026-02-02 17:48:53.149516251 +0000 UTC m=+0.106629403 container init 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.151 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054533.0622094, 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.151 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] VM Resumed (Lifecycle Event)
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.154 239853 INFO nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Took 6.64 seconds to spawn the instance on the hypervisor.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.154 239853 DEBUG nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:48:53 np0005605476 podman[252776]: 2026-02-02 17:48:53.154721177 +0000 UTC m=+0.111834299 container start 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:48:53 np0005605476 podman[252776]: 2026-02-02 17:48:53.062342175 +0000 UTC m=+0.019455347 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [NOTICE]   (252796) : New worker (252798) forked
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [NOTICE]   (252796) : Loading success.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.179 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.183 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.218 239853 INFO nova.compute.manager [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Took 7.65 seconds to build instance.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.234 239853 DEBUG oslo_concurrency.lockutils [None req-024b6ed2-1345-46f9-af2f-ad3368e833c4 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.257 239853 DEBUG oslo_concurrency.lockutils [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.258 239853 DEBUG oslo_concurrency.lockutils [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.270 239853 INFO nova.compute.manager [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Detaching volume a3092a06-e1d3-4b42-bd2c-5414dac74057
Feb  2 12:48:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 260 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.1 MiB/s wr, 132 op/s
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.459 239853 INFO nova.virt.block_device [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Attempting to driver detach volume a3092a06-e1d3-4b42-bd2c-5414dac74057 from mountpoint /dev/vdb
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.468 239853 DEBUG nova.virt.libvirt.driver [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Attempting to detach device vdb from instance f386639b-0601-4234-b5b2-2c91952427d4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.468 239853 DEBUG nova.virt.libvirt.guest [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a3092a06-e1d3-4b42-bd2c-5414dac74057">
Feb  2 12:48:53 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <serial>a3092a06-e1d3-4b42-bd2c-5414dac74057</serial>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:48:53 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.476 239853 INFO nova.virt.libvirt.driver [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully detached device vdb from instance f386639b-0601-4234-b5b2-2c91952427d4 from the persistent domain config.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.476 239853 DEBUG nova.virt.libvirt.driver [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f386639b-0601-4234-b5b2-2c91952427d4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.477 239853 DEBUG nova.virt.libvirt.guest [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a3092a06-e1d3-4b42-bd2c-5414dac74057">
Feb  2 12:48:53 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <serial>a3092a06-e1d3-4b42-bd2c-5414dac74057</serial>
Feb  2 12:48:53 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:48:53 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:48:53 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.501 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.528 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054533.5277975, f386639b-0601-4234-b5b2-2c91952427d4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.529 239853 DEBUG nova.virt.libvirt.driver [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f386639b-0601-4234-b5b2-2c91952427d4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.532 239853 INFO nova.virt.libvirt.driver [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully detached device vdb from instance f386639b-0601-4234-b5b2-2c91952427d4 from the live domain config.
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.691 239853 DEBUG nova.objects.instance [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.771 239853 DEBUG oslo_concurrency.lockutils [None req-ef86722d-92a0-4e79-89a3-529a05a11efe e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.772 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.772 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.772 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.773 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.774 239853 INFO nova.compute.manager [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Terminating instance
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.775 239853 DEBUG nova.compute.manager [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb  2 12:48:53 np0005605476 kernel: tap6ab68bc5-61 (unregistering): left promiscuous mode
Feb  2 12:48:53 np0005605476 NetworkManager[49022]: <info>  [1770054533.8190] device (tap6ab68bc5-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:48:53 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:53Z|00100|binding|INFO|Releasing lport 6ab68bc5-611f-4eb0-b660-c813917142b8 from this chassis (sb_readonly=0)
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.825 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:53 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:53Z|00101|binding|INFO|Setting lport 6ab68bc5-611f-4eb0-b660-c813917142b8 down in Southbound
Feb  2 12:48:53 np0005605476 ovn_controller[146041]: 2026-02-02T17:48:53Z|00102|binding|INFO|Removing iface tap6ab68bc5-61 ovn-installed in OVS
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.827 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:53.832 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:3c:81 10.100.0.13'], port_security=['fa:16:3e:69:3c:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f386639b-0601-4234-b5b2-2c91952427d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b00a155c-f468-43b5-8966-400475f07a2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7308365-475e-42ee-aa15-1cfc5e7f4d4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be054af0-a896-42b9-84a2-8460e7163b78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=6ab68bc5-611f-4eb0-b660-c813917142b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:48:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:53.833 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 6ab68bc5-611f-4eb0-b660-c813917142b8 in datapath b00a155c-f468-43b5-8966-400475f07a2d unbound from our chassis
Feb  2 12:48:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:53.834 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b00a155c-f468-43b5-8966-400475f07a2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb  2 12:48:53 np0005605476 nova_compute[239846]: 2026-02-02 17:48:53.834 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:53.835 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a21108b3-56bf-4a3f-8957-405ddf1668a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:48:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:53.835 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d namespace which is not needed anymore
Feb  2 12:48:53 np0005605476 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb  2 12:48:53 np0005605476 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.554s CPU time.
Feb  2 12:48:53 np0005605476 systemd-machined[208080]: Machine qemu-7-instance-00000007 terminated.
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [NOTICE]   (252072) : haproxy version is 2.8.14-c23fe91
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [NOTICE]   (252072) : path to executable is /usr/sbin/haproxy
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [WARNING]  (252072) : Exiting Master process...
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [ALERT]    (252072) : Current worker (252081) exited with code 143 (Terminated)
Feb  2 12:48:53 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[252050]: [WARNING]  (252072) : All workers exited. Exiting... (0)
Feb  2 12:48:53 np0005605476 systemd[1]: libpod-934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1.scope: Deactivated successfully.
Feb  2 12:48:53 np0005605476 podman[252831]: 2026-02-02 17:48:53.956877269 +0000 UTC m=+0.046866366 container died 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 12:48:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1-userdata-shm.mount: Deactivated successfully.
Feb  2 12:48:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9b08cf085516fa4282a50f5558301c7ec58fcfb437cef072428b863f14a7a441-merged.mount: Deactivated successfully.
Feb  2 12:48:53 np0005605476 podman[252831]: 2026-02-02 17:48:53.997982973 +0000 UTC m=+0.087972110 container cleanup 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:48:54 np0005605476 systemd[1]: libpod-conmon-934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1.scope: Deactivated successfully.
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.011 239853 INFO nova.virt.libvirt.driver [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Instance destroyed successfully.
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.014 239853 DEBUG nova.objects.instance [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'resources' on Instance uuid f386639b-0601-4234-b5b2-2c91952427d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.037 239853 DEBUG nova.virt.libvirt.vif [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-75956878',display_name='tempest-VolumesSnapshotTestJSON-instance-75956878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-75956878',id=7,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8wjp+qPXwD4f81w+HED91hMzHd3E1i2CygQgcgjVyWX/dpxgK3Z22b8YQcDjt960It4Qgk4Vv9OcKZnbt0CjDMpqynug2JK/j/lDIHHq5f6XBLWQJJQbbAzZ7fX7z2og==',key_name='tempest-keypair-1157652695',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-y5az1t0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=f386639b-0601-4234-b5b2-2c91952427d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.037 239853 DEBUG nova.network.os_vif_util [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "6ab68bc5-611f-4eb0-b660-c813917142b8", "address": "fa:16:3e:69:3c:81", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ab68bc5-61", "ovs_interfaceid": "6ab68bc5-611f-4eb0-b660-c813917142b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.038 239853 DEBUG nova.network.os_vif_util [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.039 239853 DEBUG os_vif [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.041 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.041 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ab68bc5-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.043 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.046 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.048 239853 INFO os_vif [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:3c:81,bridge_name='br-int',has_traffic_filtering=True,id=6ab68bc5-611f-4eb0-b660-c813917142b8,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ab68bc5-61')#033[00m
Feb  2 12:48:54 np0005605476 podman[252869]: 2026-02-02 17:48:54.067487794 +0000 UTC m=+0.051571619 container remove 934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.071 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[96cf1a1c-5b0d-4b06-9f95-b6af64811bda]: (4, ('Mon Feb  2 05:48:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d (934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1)\n934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1\nMon Feb  2 05:48:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d (934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1)\n934a7d533091d972f3ae070ab21a8e644495d6e11639f7dea7ecbc5f047578c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.073 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce2a798-ba52-40b7-b341-5af4cdc816c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.073 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb00a155c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.075 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 kernel: tapb00a155c-f0: left promiscuous mode
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.078 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.079 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[13a4ee6b-0592-4548-843d-df20bf543567]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.084 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.092 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f0692af6-a7d0-4a35-ab13-d25743159838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.094 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[12ef2318-0f3d-44ef-b155-deec0d691787]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.113 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0f31eb22-8c2d-4891-8128-8464ee07e034]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374230, 'reachable_time': 27177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252903, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 systemd[1]: run-netns-ovnmeta\x2db00a155c\x2df468\x2d43b5\x2d8966\x2d400475f07a2d.mount: Deactivated successfully.
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.118 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:48:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:48:54.118 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a88d0c-494f-41b2-8cdc-b64ba63b9c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.139 239853 DEBUG nova.compute.manager [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-unplugged-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.140 239853 DEBUG oslo_concurrency.lockutils [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.140 239853 DEBUG oslo_concurrency.lockutils [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.140 239853 DEBUG oslo_concurrency.lockutils [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.141 239853 DEBUG nova.compute.manager [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] No waiting events found dispatching network-vif-unplugged-6ab68bc5-611f-4eb0-b660-c813917142b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.141 239853 DEBUG nova.compute.manager [req-7d5bf247-19dc-45e0-b19b-40ed247a06e6 req-efe1c5e2-e286-435d-bd5d-33547e8f8fc9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-unplugged-6ab68bc5-611f-4eb0-b660-c813917142b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.301 239853 INFO nova.virt.libvirt.driver [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Deleting instance files /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4_del#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.302 239853 INFO nova.virt.libvirt.driver [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Deletion of /var/lib/nova/instances/f386639b-0601-4234-b5b2-2c91952427d4_del complete#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.351 239853 INFO nova.compute.manager [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.352 239853 DEBUG oslo.service.loopingcall [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.352 239853 DEBUG nova.compute.manager [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.353 239853 DEBUG nova.network.neutron [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.666 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.914 239853 DEBUG nova.compute.manager [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.914 239853 DEBUG oslo_concurrency.lockutils [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.915 239853 DEBUG oslo_concurrency.lockutils [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.915 239853 DEBUG oslo_concurrency.lockutils [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.916 239853 DEBUG nova.compute.manager [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] No waiting events found dispatching network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:54 np0005605476 nova_compute[239846]: 2026-02-02 17:48:54.916 239853 WARNING nova.compute.manager [req-bcc625a7-4cd8-4eb1-ac5b-8493eef43855 req-9c05fb28-4b73-43d4-83e0-6436a1829c8f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received unexpected event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:48:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:48:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 237 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 197 op/s
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.135 239853 DEBUG nova.network.neutron [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.154 239853 INFO nova.compute.manager [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Took 1.80 seconds to deallocate network for instance.#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.216 239853 DEBUG nova.compute.manager [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.217 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "f386639b-0601-4234-b5b2-2c91952427d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.218 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.218 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.219 239853 DEBUG nova.compute.manager [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] No waiting events found dispatching network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.219 239853 WARNING nova.compute.manager [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received unexpected event network-vif-plugged-6ab68bc5-611f-4eb0-b660-c813917142b8 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.219 239853 DEBUG nova.compute.manager [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-changed-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.220 239853 DEBUG nova.compute.manager [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Refreshing instance network info cache due to event network-changed-07fd1022-7037-4a03-8c56-737464703551. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.220 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.220 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.221 239853 DEBUG nova.network.neutron [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Refreshing network info cache for port 07fd1022-7037-4a03-8c56-737464703551 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.273 239853 WARNING nova.volume.cinder [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Attachment b8ece670-5137-44f0-b84c-85789216b634 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = b8ece670-5137-44f0-b84c-85789216b634. (HTTP 404) (Request-ID: req-2f76d58a-aab5-4644-9d37-8002c47e79d9)#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.274 239853 INFO nova.compute.manager [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Took 0.12 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.317 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.318 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.374 239853 DEBUG oslo_concurrency.processutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:48:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:48:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269495824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.937 239853 DEBUG oslo_concurrency.processutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.946 239853 DEBUG nova.compute.provider_tree [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.968 239853 DEBUG nova.scheduler.client.report [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:48:56 np0005605476 nova_compute[239846]: 2026-02-02 17:48:56.993 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.005 239853 DEBUG nova.compute.manager [req-8bdbaee0-7326-467c-b065-79ea6590ddec req-5df0f89e-f76c-46d0-bfcd-c7dae544a0e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Received event network-vif-deleted-6ab68bc5-611f-4eb0-b660-c813917142b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.016 239853 INFO nova.scheduler.client.report [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Deleted allocations for instance f386639b-0601-4234-b5b2-2c91952427d4
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.084 239853 DEBUG oslo_concurrency.lockutils [None req-1d32d522-72b7-4754-84a1-26d5e969ae9c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "f386639b-0601-4234-b5b2-2c91952427d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.350 239853 DEBUG nova.network.neutron [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updated VIF entry in instance network info cache for port 07fd1022-7037-4a03-8c56-737464703551. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.351 239853 DEBUG nova.network.neutron [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updating instance_info_cache with network_info: [{"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:48:57 np0005605476 nova_compute[239846]: 2026-02-02 17:48:57.368 239853 DEBUG oslo_concurrency.lockutils [req-b2ec9e03-8f82-4cb6-9c7b-09abfc658c88 req-08eaa578-8511-4dc5-b915-168b46f02328 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:48:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:48:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1416639174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:48:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 203 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 921 KiB/s wr, 206 op/s
Feb  2 12:48:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb  2 12:48:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb  2 12:48:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb  2 12:48:59 np0005605476 nova_compute[239846]: 2026-02-02 17:48:59.043 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb  2 12:48:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 180 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 27 KiB/s wr, 194 op/s
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309191581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:48:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309191581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:48:59 np0005605476 nova_compute[239846]: 2026-02-02 17:48:59.668 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb  2 12:49:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb  2 12:49:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb  2 12:49:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb  2 12:49:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb  2 12:49:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb  2 12:49:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 142 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.0 KiB/s wr, 160 op/s
Feb  2 12:49:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb  2 12:49:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb  2 12:49:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb  2 12:49:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 142 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 4.7 KiB/s wr, 77 op/s
Feb  2 12:49:04 np0005605476 nova_compute[239846]: 2026-02-02 17:49:04.045 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb  2 12:49:04 np0005605476 nova_compute[239846]: 2026-02-02 17:49:04.670 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2257854812' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1867653740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1867653740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb  2 12:49:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb  2 12:49:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb  2 12:49:05 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:05Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:f3:40 10.100.0.6
Feb  2 12:49:05 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:05Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:f3:40 10.100.0.6
Feb  2 12:49:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 147 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 2.7 MiB/s wr, 166 op/s
Feb  2 12:49:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb  2 12:49:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb  2 12:49:06 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 159 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 767 KiB/s rd, 4.8 MiB/s wr, 228 op/s
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb  2 12:49:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb  2 12:49:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.859 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.860 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.879 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.965 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.966 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.974 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 12:49:08 np0005605476 nova_compute[239846]: 2026-02-02 17:49:08.975 239853 INFO nova.compute.claims [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Claim successful on node compute-0.ctlplane.example.com
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.004 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054534.0034502, f386639b-0601-4234-b5b2-2c91952427d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.005 239853 INFO nova.compute.manager [-] [instance: f386639b-0601-4234-b5b2-2c91952427d4] VM Stopped (Lifecycle Event)
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.046 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.050 239853 DEBUG nova.compute.manager [None req-8a1f3c27-b556-4f22-ba35-2e03fb780bbc - - - - - -] [instance: f386639b-0601-4234-b5b2-2c91952427d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.129 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:49:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 570 KiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb  2 12:49:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:49:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282852648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.713 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.726 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.731 239853 DEBUG nova.compute.provider_tree [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.749 239853 DEBUG nova.scheduler.client.report [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.780 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.781 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.846 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.847 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.877 239853 INFO nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 12:49:09 np0005605476 nova_compute[239846]: 2026-02-02 17:49:09.893 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.007 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.009 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.009 239853 INFO nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Creating image(s)
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.032 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.051 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.081 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.084 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.139 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.140 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.141 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.141 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.163 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.167 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 a655d235-b578-4696-84d1-169799ca8ec5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.187 239853 DEBUG nova.policy [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5e6162e875a40d7b58553a223857aa3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.385 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 a655d235-b578-4696-84d1-169799ca8ec5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.436 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] resizing rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.499 239853 DEBUG nova.objects.instance [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'migration_context' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.511 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.511 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Ensure instance console log exists: /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.511 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.512 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:49:10 np0005605476 nova_compute[239846]: 2026-02-02 17:49:10.512 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:49:10 np0005605476 podman[253116]: 2026-02-02 17:49:10.595850277 +0000 UTC m=+0.049710747 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4225565181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.099 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.099 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.119 239853 DEBUG nova.objects.instance [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.133 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Successfully created port: 727b7d70-b88e-4a8a-b74b-73820685a938 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.136 239853 INFO nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.146 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb  2 12:49:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb  2 12:49:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.365 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.365 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.366 239853 INFO nova.compute.manager [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Attaching volume 91799fd6-1cae-401b-9546-b25a8f483f08 to /dev/vdb#033[00m
Feb  2 12:49:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 190 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.519 239853 DEBUG os_brick.utils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.521 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.532 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.532 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[fd88927b-c88b-4f69-869e-7a998d82d6cf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.534 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.541 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.541 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[b25270e8-411b-4f17-95e5-22ac4ab17dcc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.542 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.548 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.548 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[96939221-787e-449e-a913-73e48577c6b9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.549 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[528b5a70-5de8-4e48-8fac-5b2dadfec3cb]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.549 239853 DEBUG oslo_concurrency.processutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.566 239853 DEBUG oslo_concurrency.processutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.569 239853 DEBUG os_brick.initiator.connectors.lightos [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.569 239853 DEBUG os_brick.initiator.connectors.lightos [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.569 239853 DEBUG os_brick.initiator.connectors.lightos [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.570 239853 DEBUG os_brick.utils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] <== get_connector_properties: return (50ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.570 239853 DEBUG nova.virt.block_device [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updating existing volume attachment record: 672b562a-10ed-45c2-961c-44f68e00c8a6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.819 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Successfully updated port: 727b7d70-b88e-4a8a-b74b-73820685a938 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.838 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.840 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquired lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.840 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.898 239853 DEBUG nova.compute.manager [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-changed-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.899 239853 DEBUG nova.compute.manager [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Refreshing instance network info cache due to event network-changed-727b7d70-b88e-4a8a-b74b-73820685a938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:49:11 np0005605476 nova_compute[239846]: 2026-02-02 17:49:11.899 239853 DEBUG oslo_concurrency.lockutils [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.096 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:49:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640614462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb  2 12:49:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb  2 12:49:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.350 239853 DEBUG nova.objects.instance [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.381 239853 DEBUG nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Attempting to attach volume 91799fd6-1cae-401b-9546-b25a8f483f08 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.383 239853 DEBUG nova.virt.libvirt.guest [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-91799fd6-1cae-401b-9546-b25a8f483f08">
Feb  2 12:49:12 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:49:12 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:12 np0005605476 nova_compute[239846]:  <serial>91799fd6-1cae-401b-9546-b25a8f483f08</serial>
Feb  2 12:49:12 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:12 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.476 239853 DEBUG nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.477 239853 DEBUG nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.477 239853 DEBUG nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.477 239853 DEBUG nova.virt.libvirt.driver [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] No VIF found with MAC fa:16:3e:00:f3:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.643 239853 DEBUG oslo_concurrency.lockutils [None req-636f6a92-6d60-4bbc-98c6-e01d74183dd3 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.765 239853 DEBUG nova.network.neutron [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updating instance_info_cache with network_info: [{"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.783 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Releasing lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.784 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Instance network_info: |[{"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.785 239853 DEBUG oslo_concurrency.lockutils [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.785 239853 DEBUG nova.network.neutron [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Refreshing network info cache for port 727b7d70-b88e-4a8a-b74b-73820685a938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.791 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Start _get_guest_xml network_info=[{"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.797 239853 WARNING nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.811 239853 DEBUG nova.virt.libvirt.host [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.812 239853 DEBUG nova.virt.libvirt.host [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.815 239853 DEBUG nova.virt.libvirt.host [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.816 239853 DEBUG nova.virt.libvirt.host [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.817 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.817 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.818 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.819 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.819 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.819 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.820 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.820 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.821 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.821 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.822 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.822 239853 DEBUG nova.virt.hardware [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:49:12 np0005605476 nova_compute[239846]: 2026-02-02 17:49:12.827 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824498687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.402 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 190 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.423 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.427 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:13 np0005605476 podman[253223]: 2026-02-02 17:49:13.656572752 +0000 UTC m=+0.105988605 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.749 239853 DEBUG nova.network.neutron [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updated VIF entry in instance network info cache for port 727b7d70-b88e-4a8a-b74b-73820685a938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.749 239853 DEBUG nova.network.neutron [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updating instance_info_cache with network_info: [{"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.762 239853 DEBUG oslo_concurrency.lockutils [req-0aeb3524-ce7c-4b71-9478-056769d72c01 req-cb538d89-fc93-4d08-92f4-f83301ee07f5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:49:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522055311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.899 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.900 239853 DEBUG nova.virt.libvirt.vif [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1422861368',display_name='tempest-VolumesSnapshotTestJSON-instance-1422861368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1422861368',id=9,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAadW9ykLaVsAIfNYDHzPuv1Na7Doa6R5vVeQXqUt7lUHuhwtCyoz3QzihZxzt2hJm+pPRqFvSpZruqVCfNz5jtWZaXln5ng4w9NzTpfw+dF+vvFINflO0q6xWAVj0/5BQ==',key_name='tempest-keypair-1118395760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-2t03ay2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:49:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=a655d235-b578-4696-84d1-169799ca8ec5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.900 239853 DEBUG nova.network.os_vif_util [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.901 239853 DEBUG nova.network.os_vif_util [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.902 239853 DEBUG nova.objects.instance [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'pci_devices' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.920 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <uuid>a655d235-b578-4696-84d1-169799ca8ec5</uuid>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <name>instance-00000009</name>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1422861368</nova:name>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:49:12</nova:creationTime>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:user uuid="e5e6162e875a40d7b58553a223857aa3">tempest-VolumesSnapshotTestJSON-2080120933-project-member</nova:user>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:project uuid="a06203a436464cf3968b3ecfc022e1dd">tempest-VolumesSnapshotTestJSON-2080120933</nova:project>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <nova:port uuid="727b7d70-b88e-4a8a-b74b-73820685a938">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="serial">a655d235-b578-4696-84d1-169799ca8ec5</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="uuid">a655d235-b578-4696-84d1-169799ca8ec5</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/a655d235-b578-4696-84d1-169799ca8ec5_disk">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/a655d235-b578-4696-84d1-169799ca8ec5_disk.config">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:f1:d4:05"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <target dev="tap727b7d70-b8"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/console.log" append="off"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:49:13 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:49:13 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:49:13 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:49:13 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.921 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Preparing to wait for external event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.921 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.922 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.923 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.924 239853 DEBUG nova.virt.libvirt.vif [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1422861368',display_name='tempest-VolumesSnapshotTestJSON-instance-1422861368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1422861368',id=9,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAadW9ykLaVsAIfNYDHzPuv1Na7Doa6R5vVeQXqUt7lUHuhwtCyoz3QzihZxzt2hJm+pPRqFvSpZruqVCfNz5jtWZaXln5ng4w9NzTpfw+dF+vvFINflO0q6xWAVj0/5BQ==',key_name='tempest-keypair-1118395760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-2t03ay2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:49:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=a655d235-b578-4696-84d1-169799ca8ec5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.925 239853 DEBUG nova.network.os_vif_util [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.928 239853 DEBUG nova.network.os_vif_util [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.929 239853 DEBUG os_vif [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.931 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.932 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.933 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.936 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap727b7d70-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.936 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap727b7d70-b8, col_values=(('external_ids', {'iface-id': '727b7d70-b88e-4a8a-b74b-73820685a938', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f1:d4:05', 'vm-uuid': 'a655d235-b578-4696-84d1-169799ca8ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:13 np0005605476 NetworkManager[49022]: <info>  [1770054553.9385] manager: (tap727b7d70-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.943 239853 INFO os_vif [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8')#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.992 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.992 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.992 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No VIF found with MAC fa:16:3e:f1:d4:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:49:13 np0005605476 nova_compute[239846]: 2026-02-02 17:49:13.993 239853 INFO nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Using config drive#033[00m
Feb  2 12:49:14 np0005605476 nova_compute[239846]: 2026-02-02 17:49:14.018 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:49:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/440606054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4139749859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:14 np0005605476 nova_compute[239846]: 2026-02-02 17:49:14.715 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:14 np0005605476 nova_compute[239846]: 2026-02-02 17:49:14.965 239853 INFO nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Creating config drive at /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config#033[00m
Feb  2 12:49:14 np0005605476 nova_compute[239846]: 2026-02-02 17:49:14.970 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8hpk7ctz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb  2 12:49:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb  2 12:49:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.093 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8hpk7ctz" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.120 239853 DEBUG nova.storage.rbd_utils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] rbd image a655d235-b578-4696-84d1-169799ca8ec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.124 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config a655d235-b578-4696-84d1-169799ca8ec5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.244 239853 DEBUG oslo_concurrency.processutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config a655d235-b578-4696-84d1-169799ca8ec5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.245 239853 INFO nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Deleting local config drive /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5/disk.config because it was imported into RBD.#033[00m
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.2807] manager: (tap727b7d70-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Feb  2 12:49:15 np0005605476 kernel: tap727b7d70-b8: entered promiscuous mode
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.316 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:15Z|00103|binding|INFO|Claiming lport 727b7d70-b88e-4a8a-b74b-73820685a938 for this chassis.
Feb  2 12:49:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:15Z|00104|binding|INFO|727b7d70-b88e-4a8a-b74b-73820685a938: Claiming fa:16:3e:f1:d4:05 10.100.0.3
Feb  2 12:49:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:15Z|00105|binding|INFO|Setting lport 727b7d70-b88e-4a8a-b74b-73820685a938 ovn-installed in OVS
Feb  2 12:49:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:15Z|00106|binding|INFO|Setting lport 727b7d70-b88e-4a8a-b74b-73820685a938 up in Southbound
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.324 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:d4:05 10.100.0.3'], port_security=['fa:16:3e:f1:d4:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a655d235-b578-4696-84d1-169799ca8ec5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b00a155c-f468-43b5-8966-400475f07a2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '074795d7-6af3-42de-aea8-6dbfcd1ba557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be054af0-a896-42b9-84a2-8460e7163b78, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=727b7d70-b88e-4a8a-b74b-73820685a938) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.325 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 727b7d70-b88e-4a8a-b74b-73820685a938 in datapath b00a155c-f468-43b5-8966-400475f07a2d bound to our chassis#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.326 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b00a155c-f468-43b5-8966-400475f07a2d#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.325 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 systemd-machined[208080]: New machine qemu-9-instance-00000009.
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.336 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf1d65c-2f4a-41db-8bbe-d87842d4190d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.336 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb00a155c-f1 in ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.338 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb00a155c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.338 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e572bf-f56e-4bf6-a5ce-8cb167c297db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.338 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6564e2a4-7bfa-4bdc-8355-6bc5fb199976]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.346 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[ff02bb9e-4524-4b10-9213-6242697c09aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb  2 12:49:15 np0005605476 systemd-udevd[253329]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.357 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ee5059-105c-435e-bc11-7fe350db50a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.3624] device (tap727b7d70-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.3630] device (tap727b7d70-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.375 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1046759b-0651-430b-8d56-41a2099378c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.3799] manager: (tapb00a155c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Feb  2 12:49:15 np0005605476 systemd-udevd[253332]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.379 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[91c98628-6682-4577-93a0-4d1f240db0f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.403 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f313a772-1ccf-4b7e-a5e0-9de0652c5230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.405 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[183fa492-bf55-492d-a506-a71bcf47c72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.4193] device (tapb00a155c-f0): carrier: link connected
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.422 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[7573d7d2-8907-4b68-9eb2-bd98aa247cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 4.0 MiB/s wr, 111 op/s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.435 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[eab23e95-5a54-4ffe-a48f-4aed3bc3755b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb00a155c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:40:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379675, 'reachable_time': 38280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253359, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.449 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3555633-ca2e-40e5-9b2a-1706e3494815]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:40a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 379675, 'tstamp': 379675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253360, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.462 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b953f8c8-53ce-4d83-95d8-7c18e9e567a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb00a155c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:40:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379675, 'reachable_time': 38280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253361, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.483 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ea87fc-fd71-4094-b074-c0179e8a66b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.529 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e4985409-114f-4aa7-902d-e38e54f8f40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.532 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb00a155c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.532 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.532 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb00a155c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.534 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 NetworkManager[49022]: <info>  [1770054555.5352] manager: (tapb00a155c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Feb  2 12:49:15 np0005605476 kernel: tapb00a155c-f0: entered promiscuous mode
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.537 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.538 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb00a155c-f0, col_values=(('external_ids', {'iface-id': 'c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.539 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:15Z|00107|binding|INFO|Releasing lport c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb from this chassis (sb_readonly=0)
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.548 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.547 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.550 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[19fe968d-fbb8-4391-ab48-58782cd74ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.551 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-b00a155c-f468-43b5-8966-400475f07a2d
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/b00a155c-f468-43b5-8966-400475f07a2d.pid.haproxy
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID b00a155c-f468-43b5-8966-400475f07a2d
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:49:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:15.553 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'env', 'PROCESS_TAG=haproxy-b00a155c-f468-43b5-8966-400475f07a2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b00a155c-f468-43b5-8966-400475f07a2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.698 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054555.6980226, a655d235-b578-4696-84d1-169799ca8ec5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.699 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] VM Started (Lifecycle Event)#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.717 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.723 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054555.698951, a655d235-b578-4696-84d1-169799ca8ec5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.723 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.737 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.740 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:49:15 np0005605476 nova_compute[239846]: 2026-02-02 17:49:15.756 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:49:15 np0005605476 podman[253435]: 2026-02-02 17:49:15.907295848 +0000 UTC m=+0.053039140 container create 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 12:49:15 np0005605476 systemd[1]: Started libpod-conmon-468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e.scope.
Feb  2 12:49:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c8bbcefa4a851871f56ad348c9cc75b48f62acf4b2c11bd5b67fcf7abe73e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:15 np0005605476 podman[253435]: 2026-02-02 17:49:15.967970581 +0000 UTC m=+0.113713883 container init 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:49:15 np0005605476 podman[253435]: 2026-02-02 17:49:15.880771003 +0000 UTC m=+0.026514385 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:49:15 np0005605476 podman[253435]: 2026-02-02 17:49:15.976343726 +0000 UTC m=+0.122087018 container start 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true)
Feb  2 12:49:16 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [NOTICE]   (253460) : New worker (253479) forked
Feb  2 12:49:16 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [NOTICE]   (253460) : Loading success.
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.253 239853 DEBUG nova.compute.manager [req-b525d938-c305-4559-8133-ba7018b50b46 req-28ec24ca-3070-4dd4-ae12-526fb3689fe9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.253 239853 DEBUG oslo_concurrency.lockutils [req-b525d938-c305-4559-8133-ba7018b50b46 req-28ec24ca-3070-4dd4-ae12-526fb3689fe9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.254 239853 DEBUG oslo_concurrency.lockutils [req-b525d938-c305-4559-8133-ba7018b50b46 req-28ec24ca-3070-4dd4-ae12-526fb3689fe9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.254 239853 DEBUG oslo_concurrency.lockutils [req-b525d938-c305-4559-8133-ba7018b50b46 req-28ec24ca-3070-4dd4-ae12-526fb3689fe9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.254 239853 DEBUG nova.compute.manager [req-b525d938-c305-4559-8133-ba7018b50b46 req-28ec24ca-3070-4dd4-ae12-526fb3689fe9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Processing event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.254 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.259 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054556.2589164, a655d235-b578-4696-84d1-169799ca8ec5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.259 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.260 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.264 239853 INFO nova.virt.libvirt.driver [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Instance spawned successfully.#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.264 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.282 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.288 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.290 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.290 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.291 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.291 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.291 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.292 239853 DEBUG nova.virt.libvirt.driver [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.319 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.349 239853 INFO nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Took 6.34 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.349 239853 DEBUG nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.402 239853 INFO nova.compute.manager [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Took 7.47 seconds to build instance.#033[00m
Feb  2 12:49:16 np0005605476 nova_compute[239846]: 2026-02-02 17:49:16.421 239853 DEBUG oslo_concurrency.lockutils [None req-0c7f6be4-c473-4b9d-a7c0-24219debb623 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:49:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:49:16 np0005605476 podman[253608]: 2026-02-02 17:49:16.995685212 +0000 UTC m=+0.042980547 container create 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:49:17 np0005605476 systemd[1]: Started libpod-conmon-12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e.scope.
Feb  2 12:49:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:17.055773818 +0000 UTC m=+0.103069153 container init 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:17.061857759 +0000 UTC m=+0.109153094 container start 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:17.065214353 +0000 UTC m=+0.112509708 container attach 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:49:17 np0005605476 sharp_torvalds[253624]: 167 167
Feb  2 12:49:17 np0005605476 systemd[1]: libpod-12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e.scope: Deactivated successfully.
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:17.066828919 +0000 UTC m=+0.114124254 container died 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:49:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:49:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:16.978077058 +0000 UTC m=+0.025372393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e45c99f540b4aa977487d2aa02e4f62541b91fbdf1d278b38b888fcbd62a85e6-merged.mount: Deactivated successfully.
Feb  2 12:49:17 np0005605476 podman[253608]: 2026-02-02 17:49:17.098656172 +0000 UTC m=+0.145951507 container remove 12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_torvalds, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:49:17 np0005605476 systemd[1]: libpod-conmon-12c47b1d32c7ee40a58b8ffa6146297ca682f0fb11b5c494645f938d37e2244e.scope: Deactivated successfully.
Feb  2 12:49:17 np0005605476 podman[253648]: 2026-02-02 17:49:17.23792092 +0000 UTC m=+0.036834395 container create 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:49:17 np0005605476 systemd[1]: Started libpod-conmon-9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8.scope.
Feb  2 12:49:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:17 np0005605476 podman[253648]: 2026-02-02 17:49:17.22010658 +0000 UTC m=+0.019020055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:17 np0005605476 podman[253648]: 2026-02-02 17:49:17.32557847 +0000 UTC m=+0.124491935 container init 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:49:17 np0005605476 podman[253648]: 2026-02-02 17:49:17.333514603 +0000 UTC m=+0.132428078 container start 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:49:17 np0005605476 podman[253648]: 2026-02-02 17:49:17.336673182 +0000 UTC m=+0.135586647 container attach 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:49:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Feb  2 12:49:17 np0005605476 heuristic_boyd[253664]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:49:17 np0005605476 heuristic_boyd[253664]: --> All data devices are unavailable
Feb  2 12:49:17 np0005605476 systemd[1]: libpod-9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8.scope: Deactivated successfully.
Feb  2 12:49:17 np0005605476 podman[253684]: 2026-02-02 17:49:17.859758482 +0000 UTC m=+0.034087558 container died 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:49:17 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2ed73b2a68f57f0d360e735f92b3be15bff17fb7ed46524cfd2cfdeb85e0a054-merged.mount: Deactivated successfully.
Feb  2 12:49:17 np0005605476 podman[253684]: 2026-02-02 17:49:17.900614008 +0000 UTC m=+0.074943114 container remove 9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:49:17 np0005605476 systemd[1]: libpod-conmon-9392472ecd230f354cab3f43077ab803c682ef0b15bda3f8bec930230b56b2d8.scope: Deactivated successfully.
Feb  2 12:49:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518001023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3563995147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.334869635 +0000 UTC m=+0.042528114 container create 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.359 239853 DEBUG nova.compute.manager [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.359 239853 DEBUG oslo_concurrency.lockutils [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.360 239853 DEBUG oslo_concurrency.lockutils [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.360 239853 DEBUG oslo_concurrency.lockutils [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.360 239853 DEBUG nova.compute.manager [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] No waiting events found dispatching network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.360 239853 WARNING nova.compute.manager [req-e3afaf24-fa57-4778-b76c-9f4b9736145a req-cee50638-8cca-4dd5-ab1f-16d1c813150f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received unexpected event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:49:18 np0005605476 systemd[1]: Started libpod-conmon-973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd.scope.
Feb  2 12:49:18 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.406321071 +0000 UTC m=+0.113979560 container init 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.410354514 +0000 UTC m=+0.118012993 container start 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.317619131 +0000 UTC m=+0.025277650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:18 np0005605476 ecstatic_cohen[253779]: 167 167
Feb  2 12:49:18 np0005605476 systemd[1]: libpod-973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd.scope: Deactivated successfully.
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.413211694 +0000 UTC m=+0.120870173 container attach 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:49:18 np0005605476 conmon[253779]: conmon 973a4743b14c087d584a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd.scope/container/memory.events
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.414344336 +0000 UTC m=+0.122002795 container died 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:49:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f0b571f0a5df00dee7764ebcae37ddb1d7c12ce3db7c4ce4c1bbc5caf566eaf6-merged.mount: Deactivated successfully.
Feb  2 12:49:18 np0005605476 podman[253762]: 2026-02-02 17:49:18.444169913 +0000 UTC m=+0.151828372 container remove 973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:49:18 np0005605476 systemd[1]: libpod-conmon-973a4743b14c087d584a01b31a5589f9b0f7653056fd8cd892659a4d51dd41dd.scope: Deactivated successfully.
Feb  2 12:49:18 np0005605476 podman[253803]: 2026-02-02 17:49:18.564452448 +0000 UTC m=+0.039028656 container create 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:49:18 np0005605476 systemd[1]: Started libpod-conmon-0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172.scope.
Feb  2 12:49:18 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8752a91993ab8800cb7fff1c16bf73d58aa197b32a96ed8a47f2cd4663af721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8752a91993ab8800cb7fff1c16bf73d58aa197b32a96ed8a47f2cd4663af721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8752a91993ab8800cb7fff1c16bf73d58aa197b32a96ed8a47f2cd4663af721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8752a91993ab8800cb7fff1c16bf73d58aa197b32a96ed8a47f2cd4663af721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:18 np0005605476 podman[253803]: 2026-02-02 17:49:18.547370949 +0000 UTC m=+0.021947177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:18 np0005605476 podman[253803]: 2026-02-02 17:49:18.651891482 +0000 UTC m=+0.126467710 container init 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:49:18 np0005605476 podman[253803]: 2026-02-02 17:49:18.65822809 +0000 UTC m=+0.132804298 container start 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:49:18 np0005605476 podman[253803]: 2026-02-02 17:49:18.661006498 +0000 UTC m=+0.135582716 container attach 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]: {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    "0": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "devices": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "/dev/loop3"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            ],
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_name": "ceph_lv0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_size": "21470642176",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "name": "ceph_lv0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "tags": {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_name": "ceph",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.crush_device_class": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.encrypted": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.objectstore": "bluestore",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_id": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.vdo": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.with_tpm": "0"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            },
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "vg_name": "ceph_vg0"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        }
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    ],
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    "1": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "devices": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "/dev/loop4"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            ],
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_name": "ceph_lv1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_size": "21470642176",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "name": "ceph_lv1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "tags": {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_name": "ceph",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.crush_device_class": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.encrypted": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.objectstore": "bluestore",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_id": "1",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.vdo": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.with_tpm": "0"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            },
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "vg_name": "ceph_vg1"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        }
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    ],
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    "2": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "devices": [
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "/dev/loop5"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            ],
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_name": "ceph_lv2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_size": "21470642176",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "name": "ceph_lv2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "tags": {
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.cluster_name": "ceph",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.crush_device_class": "",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.encrypted": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.objectstore": "bluestore",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osd_id": "2",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.vdo": "0",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:                "ceph.with_tpm": "0"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            },
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "type": "block",
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:            "vg_name": "ceph_vg2"
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:        }
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]:    ]
Feb  2 12:49:18 np0005605476 condescending_blackburn[253820]: }
Feb  2 12:49:18 np0005605476 systemd[1]: libpod-0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172.scope: Deactivated successfully.
Feb  2 12:49:18 np0005605476 nova_compute[239846]: 2026-02-02 17:49:18.938 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:18 np0005605476 podman[253829]: 2026-02-02 17:49:18.942533319 +0000 UTC m=+0.031709371 container died 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:49:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e8752a91993ab8800cb7fff1c16bf73d58aa197b32a96ed8a47f2cd4663af721-merged.mount: Deactivated successfully.
Feb  2 12:49:18 np0005605476 podman[253829]: 2026-02-02 17:49:18.979405304 +0000 UTC m=+0.068581316 container remove 0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:49:18 np0005605476 systemd[1]: libpod-conmon-0fd809b3cc6e82c7e2116687439fb22b5a2cb5445c73f6af0dcf23f998fe8172.scope: Deactivated successfully.
Feb  2 12:49:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb  2 12:49:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb  2 12:49:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.390554743 +0000 UTC m=+0.039635484 container create 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:49:19 np0005605476 systemd[1]: Started libpod-conmon-218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1.scope.
Feb  2 12:49:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 151 op/s
Feb  2 12:49:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.454339983 +0000 UTC m=+0.103420754 container init 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.461862814 +0000 UTC m=+0.110943555 container start 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:49:19 np0005605476 modest_benz[253922]: 167 167
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.465393013 +0000 UTC m=+0.114473754 container attach 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:49:19 np0005605476 systemd[1]: libpod-218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1.scope: Deactivated successfully.
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.466445942 +0000 UTC m=+0.115526683 container died 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.374916974 +0000 UTC m=+0.023997735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f1e5d95f8c90faba9e59de5c49058620e99a4ad45c9933fb2985bb8170cc5445-merged.mount: Deactivated successfully.
Feb  2 12:49:19 np0005605476 podman[253906]: 2026-02-02 17:49:19.499406357 +0000 UTC m=+0.148487098 container remove 218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:49:19 np0005605476 systemd[1]: libpod-conmon-218a731fb6dbf8c8c2672e440a6eade188cd7312485c3249bb8cead56dd770d1.scope: Deactivated successfully.
Feb  2 12:49:19 np0005605476 podman[253949]: 2026-02-02 17:49:19.650376173 +0000 UTC m=+0.043721328 container create f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:49:19 np0005605476 systemd[1]: Started libpod-conmon-f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792.scope.
Feb  2 12:49:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:49:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1535d34727f4c778619a5f1d835f2f2d5cd0204234406031841cf13662b24a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1535d34727f4c778619a5f1d835f2f2d5cd0204234406031841cf13662b24a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1535d34727f4c778619a5f1d835f2f2d5cd0204234406031841cf13662b24a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1535d34727f4c778619a5f1d835f2f2d5cd0204234406031841cf13662b24a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:49:19 np0005605476 nova_compute[239846]: 2026-02-02 17:49:19.716 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:19 np0005605476 podman[253949]: 2026-02-02 17:49:19.630408793 +0000 UTC m=+0.023754048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:49:19 np0005605476 podman[253949]: 2026-02-02 17:49:19.730963325 +0000 UTC m=+0.124308510 container init f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:49:19 np0005605476 podman[253949]: 2026-02-02 17:49:19.737289832 +0000 UTC m=+0.130635017 container start f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:49:19 np0005605476 podman[253949]: 2026-02-02 17:49:19.740601705 +0000 UTC m=+0.133946870 container attach f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb  2 12:49:20 np0005605476 lvm[254043]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:49:20 np0005605476 lvm[254043]: VG ceph_vg0 finished
Feb  2 12:49:20 np0005605476 lvm[254046]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:49:20 np0005605476 lvm[254046]: VG ceph_vg1 finished
Feb  2 12:49:20 np0005605476 lvm[254047]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:49:20 np0005605476 lvm[254047]: VG ceph_vg2 finished
Feb  2 12:49:20 np0005605476 bold_lalande[253967]: {}
Feb  2 12:49:20 np0005605476 systemd[1]: libpod-f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792.scope: Deactivated successfully.
Feb  2 12:49:20 np0005605476 podman[253949]: 2026-02-02 17:49:20.521992715 +0000 UTC m=+0.915337880 container died f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 12:49:20 np0005605476 systemd[1]: libpod-f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792.scope: Consumed 1.051s CPU time.
Feb  2 12:49:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f1535d34727f4c778619a5f1d835f2f2d5cd0204234406031841cf13662b24a3-merged.mount: Deactivated successfully.
Feb  2 12:49:20 np0005605476 podman[253949]: 2026-02-02 17:49:20.553853849 +0000 UTC m=+0.947199014 container remove f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:49:20 np0005605476 systemd[1]: libpod-conmon-f15fb07830184f7cd2dda70ad8b244190853f5e407ba032e2b9d704fac266792.scope: Deactivated successfully.
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:49:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:20.881 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.880 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:20.884 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.896 239853 DEBUG nova.compute.manager [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-changed-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.896 239853 DEBUG nova.compute.manager [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Refreshing instance network info cache due to event network-changed-727b7d70-b88e-4a8a-b74b-73820685a938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.896 239853 DEBUG oslo_concurrency.lockutils [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.896 239853 DEBUG oslo_concurrency.lockutils [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:49:20 np0005605476 nova_compute[239846]: 2026-02-02 17:49:20.896 239853 DEBUG nova.network.neutron [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Refreshing network info cache for port 727b7d70-b88e-4a8a-b74b-73820685a938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:49:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 44 KiB/s wr, 306 op/s
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268907741' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb  2 12:49:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb  2 12:49:21 np0005605476 nova_compute[239846]: 2026-02-02 17:49:21.879 239853 DEBUG nova.network.neutron [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updated VIF entry in instance network info cache for port 727b7d70-b88e-4a8a-b74b-73820685a938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:49:21 np0005605476 nova_compute[239846]: 2026-02-02 17:49:21.880 239853 DEBUG nova.network.neutron [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updating instance_info_cache with network_info: [{"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:49:21 np0005605476 nova_compute[239846]: 2026-02-02 17:49:21.901 239853 DEBUG oslo_concurrency.lockutils [req-9fa7e57b-17f7-409a-ba45-d65856c0ab11 req-d550cfae-7f14-4169-899f-5d67a4c7c14a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-a655d235-b578-4696-84d1-169799ca8ec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:49:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Feb  2 12:49:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Feb  2 12:49:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Feb  2 12:49:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 16 KiB/s wr, 274 op/s
Feb  2 12:49:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Feb  2 12:49:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Feb  2 12:49:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Feb  2 12:49:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4231937943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:23 np0005605476 nova_compute[239846]: 2026-02-02 17:49:23.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Feb  2 12:49:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Feb  2 12:49:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Feb  2 12:49:24 np0005605476 nova_compute[239846]: 2026-02-02 17:49:24.718 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.5 KiB/s wr, 37 op/s
Feb  2 12:49:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Feb  2 12:49:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Feb  2 12:49:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Feb  2 12:49:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb  2 12:49:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb  2 12:49:26 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb  2 12:49:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.5 KiB/s wr, 146 op/s
Feb  2 12:49:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb  2 12:49:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb  2 12:49:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb  2 12:49:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:28Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f1:d4:05 10.100.0.3
Feb  2 12:49:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:28Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f1:d4:05 10.100.0.3
Feb  2 12:49:28 np0005605476 nova_compute[239846]: 2026-02-02 17:49:28.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:29 np0005605476 nova_compute[239846]: 2026-02-02 17:49:29.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 225 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 427 KiB/s rd, 1.2 MiB/s wr, 183 op/s
Feb  2 12:49:29 np0005605476 nova_compute[239846]: 2026-02-02 17:49:29.721 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb  2 12:49:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb  2 12:49:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.254 239853 DEBUG oslo_concurrency.lockutils [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.255 239853 DEBUG oslo_concurrency.lockutils [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.271 239853 INFO nova.compute.manager [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Detaching volume 91799fd6-1cae-401b-9546-b25a8f483f08#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.396 239853 INFO nova.virt.block_device [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Attempting to driver detach volume 91799fd6-1cae-401b-9546-b25a8f483f08 from mountpoint /dev/vdb#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.409 239853 DEBUG nova.virt.libvirt.driver [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Attempting to detach device vdb from instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.410 239853 DEBUG nova.virt.libvirt.guest [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-91799fd6-1cae-401b-9546-b25a8f483f08">
Feb  2 12:49:30 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <serial>91799fd6-1cae-401b-9546-b25a8f483f08</serial>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:30 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.420 239853 INFO nova.virt.libvirt.driver [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully detached device vdb from instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f from the persistent domain config.#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.421 239853 DEBUG nova.virt.libvirt.driver [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.421 239853 DEBUG nova.virt.libvirt.guest [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-91799fd6-1cae-401b-9546-b25a8f483f08">
Feb  2 12:49:30 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <serial>91799fd6-1cae-401b-9546-b25a8f483f08</serial>
Feb  2 12:49:30 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:49:30 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:30 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.533 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054570.532847, 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.536 239853 DEBUG nova.virt.libvirt.driver [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.538 239853 INFO nova.virt.libvirt.driver [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully detached device vdb from instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f from the live domain config.#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.678 239853 DEBUG nova.objects.instance [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'flavor' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:30 np0005605476 nova_compute[239846]: 2026-02-02 17:49:30.720 239853 DEBUG oslo_concurrency.lockutils [None req-39ec99e0-81e2-4ffc-b33a-0b1337f503b2 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:30.886 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 246 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 706 KiB/s rd, 4.4 MiB/s wr, 275 op/s
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.583 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.583 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.584 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.584 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.585 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.586 239853 INFO nova.compute.manager [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Terminating instance#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.588 239853 DEBUG nova.compute.manager [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:49:31 np0005605476 kernel: tap07fd1022-70 (unregistering): left promiscuous mode
Feb  2 12:49:31 np0005605476 NetworkManager[49022]: <info>  [1770054571.6424] device (tap07fd1022-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:49:31 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:31Z|00108|binding|INFO|Releasing lport 07fd1022-7037-4a03-8c56-737464703551 from this chassis (sb_readonly=0)
Feb  2 12:49:31 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:31Z|00109|binding|INFO|Setting lport 07fd1022-7037-4a03-8c56-737464703551 down in Southbound
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.651 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:31Z|00110|binding|INFO|Removing iface tap07fd1022-70 ovn-installed in OVS
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.654 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.661 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f3:40 10.100.0.6'], port_security=['fa:16:3e:00:f3:40 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d00d4e2-c297-40a8-b6fe-9418b8da0b2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962ccc49-6579-46f5-b577-7995d4fef976', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ff6dfb8be334eeb94d13588a609b2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a28a626-93bb-44f5-9e6f-8b218f41aeb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58e5e8fa-47da-4a70-b729-f06398e2ea5a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=07fd1022-7037-4a03-8c56-737464703551) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.664 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 07fd1022-7037-4a03-8c56-737464703551 in datapath 962ccc49-6579-46f5-b577-7995d4fef976 unbound from our chassis#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.667 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962ccc49-6579-46f5-b577-7995d4fef976, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.669 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3f0f1e-5b5f-448e-a178-e0634f66d1d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.670 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 namespace which is not needed anymore#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.670 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb  2 12:49:31 np0005605476 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.716s CPU time.
Feb  2 12:49:31 np0005605476 systemd-machined[208080]: Machine qemu-8-instance-00000008 terminated.
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.826 239853 INFO nova.virt.libvirt.driver [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Instance destroyed successfully.#033[00m
Feb  2 12:49:31 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [NOTICE]   (252796) : haproxy version is 2.8.14-c23fe91
Feb  2 12:49:31 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [NOTICE]   (252796) : path to executable is /usr/sbin/haproxy
Feb  2 12:49:31 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [WARNING]  (252796) : Exiting Master process...
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.828 239853 DEBUG nova.objects.instance [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lazy-loading 'resources' on Instance uuid 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:31 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [ALERT]    (252796) : Current worker (252798) exited with code 143 (Terminated)
Feb  2 12:49:31 np0005605476 neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976[252792]: [WARNING]  (252796) : All workers exited. Exiting... (0)
Feb  2 12:49:31 np0005605476 systemd[1]: libpod-58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29.scope: Deactivated successfully.
Feb  2 12:49:31 np0005605476 podman[254115]: 2026-02-02 17:49:31.83948454 +0000 UTC m=+0.058072820 container died 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.840 239853 DEBUG nova.virt.libvirt.vif [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:48:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-441057123',display_name='tempest-VolumesBackupsTest-instance-441057123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-441057123',id=8,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNTCyGzGvgWBxW2biKNEkphRdb+/933KJloZq2c5+QHh0667htFhqdayfXzcKBdVt/9i5Q4P+p7ZcAAXnsFy6XQPvwjP47n4nw8+X/mzl+GON90vJUqVbTo46HKL78gj0A==',key_name='tempest-keypair-62621088',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:48:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ff6dfb8be334eeb94d13588a609b2bd',ramdisk_id='',reservation_id='r-oi1y7ziq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-27790021',owner_user_name='tempest-VolumesBackupsTest-27790021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:48:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b2b7987477543268373aac3ffda0c37',uuid=8d00d4e2-c297-40a8-b6fe-9418b8da0b2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.841 239853 DEBUG nova.network.os_vif_util [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converting VIF {"id": "07fd1022-7037-4a03-8c56-737464703551", "address": "fa:16:3e:00:f3:40", "network": {"id": "962ccc49-6579-46f5-b577-7995d4fef976", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1852707247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ff6dfb8be334eeb94d13588a609b2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd1022-70", "ovs_interfaceid": "07fd1022-7037-4a03-8c56-737464703551", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.842 239853 DEBUG nova.network.os_vif_util [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.842 239853 DEBUG os_vif [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.843 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.843 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07fd1022-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.845 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.847 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.849 239853 INFO os_vif [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:f3:40,bridge_name='br-int',has_traffic_filtering=True,id=07fd1022-7037-4a03-8c56-737464703551,network=Network(962ccc49-6579-46f5-b577-7995d4fef976),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd1022-70')#033[00m
Feb  2 12:49:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29-userdata-shm.mount: Deactivated successfully.
Feb  2 12:49:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2507fe5a29bc482580b96a13dae67f7d8b04fe857cd4302c4bf15848bb5bdba7-merged.mount: Deactivated successfully.
Feb  2 12:49:31 np0005605476 podman[254115]: 2026-02-02 17:49:31.878339821 +0000 UTC m=+0.096928121 container cleanup 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.882 239853 DEBUG nova.compute.manager [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-unplugged-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.883 239853 DEBUG oslo_concurrency.lockutils [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.883 239853 DEBUG oslo_concurrency.lockutils [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.884 239853 DEBUG oslo_concurrency.lockutils [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.884 239853 DEBUG nova.compute.manager [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] No waiting events found dispatching network-vif-unplugged-07fd1022-7037-4a03-8c56-737464703551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.885 239853 DEBUG nova.compute.manager [req-242551b4-345c-4c18-a7a3-bd2d3dc5a757 req-785d015f-6704-4664-a896-a0a9adc9e9d4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-unplugged-07fd1022-7037-4a03-8c56-737464703551 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:49:31 np0005605476 systemd[1]: libpod-conmon-58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29.scope: Deactivated successfully.
Feb  2 12:49:31 np0005605476 podman[254167]: 2026-02-02 17:49:31.938765437 +0000 UTC m=+0.035732504 container remove 58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.942 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[347a681e-9046-4b3e-b3a7-654172ef7783]: (4, ('Mon Feb  2 05:49:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 (58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29)\n58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29\nMon Feb  2 05:49:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 (58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29)\n58562e66d69aeec0ebf9e012b05092abef78f82df0ebb2b57afc6a15f471aa29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.944 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fbeda5-67b4-4f7d-8869-e51bd882ea74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.945 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962ccc49-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.948 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 kernel: tap962ccc49-60: left promiscuous mode
Feb  2 12:49:31 np0005605476 nova_compute[239846]: 2026-02-02 17:49:31.955 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.957 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7ded876b-38af-4027-8ee3-af2b7ca8c184]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.969 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[05cac614-8a0a-425e-8e9e-ff8ae610afe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.971 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[52d8d8ae-4b37-4b03-939c-7e530577b8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.986 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e84e46ca-a2e1-4ae2-9318-c58286b1fa4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377395, 'reachable_time': 24564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254186, 'error': None, 'target': 'ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:31 np0005605476 systemd[1]: run-netns-ovnmeta\x2d962ccc49\x2d6579\x2d46f5\x2db577\x2d7995d4fef976.mount: Deactivated successfully.
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.991 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962ccc49-6579-46f5-b577-7995d4fef976 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:49:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:31.991 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[919ddc95-0de8-40d7-b747-9797337980df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb  2 12:49:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb  2 12:49:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.109 239853 INFO nova.virt.libvirt.driver [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Deleting instance files /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_del#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.110 239853 INFO nova.virt.libvirt.driver [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Deletion of /var/lib/nova/instances/8d00d4e2-c297-40a8-b6fe-9418b8da0b2f_del complete#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.158 239853 INFO nova.compute.manager [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.159 239853 DEBUG oslo.service.loopingcall [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.159 239853 DEBUG nova.compute.manager [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.159 239853 DEBUG nova.network.neutron [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.285 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.946 239853 DEBUG nova.network.neutron [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:49:32 np0005605476 nova_compute[239846]: 2026-02-02 17:49:32.966 239853 INFO nova.compute.manager [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Took 0.81 seconds to deallocate network for instance.#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.017 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.017 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb  2 12:49:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb  2 12:49:33 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.097 239853 DEBUG oslo_concurrency.processutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.266 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 246 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 862 KiB/s rd, 5.4 MiB/s wr, 335 op/s
Feb  2 12:49:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:49:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029231142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.637 239853 DEBUG oslo_concurrency.processutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.642 239853 DEBUG nova.compute.provider_tree [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.664 239853 DEBUG nova.scheduler.client.report [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.690 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.722 239853 INFO nova.scheduler.client.report [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Deleted allocations for instance 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f#033[00m
Feb  2 12:49:33 np0005605476 nova_compute[239846]: 2026-02-02 17:49:33.816 239853 DEBUG oslo_concurrency.lockutils [None req-cc06e32f-ac4b-4825-8162-4c2d55f9a1b7 7b2b7987477543268373aac3ffda0c37 7ff6dfb8be334eeb94d13588a609b2bd - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb  2 12:49:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb  2 12:49:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.162 239853 DEBUG nova.compute.manager [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.162 239853 DEBUG oslo_concurrency.lockutils [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.162 239853 DEBUG oslo_concurrency.lockutils [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.162 239853 DEBUG oslo_concurrency.lockutils [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "8d00d4e2-c297-40a8-b6fe-9418b8da0b2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.163 239853 DEBUG nova.compute.manager [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] No waiting events found dispatching network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.163 239853 WARNING nova.compute.manager [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received unexpected event network-vif-plugged-07fd1022-7037-4a03-8c56-737464703551 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.163 239853 DEBUG nova.compute.manager [req-7cd527e9-2a9b-4413-b8b4-ace17828766b req-4ccb45b1-c39c-4bdc-9477-0946cc289cb7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Received event network-vif-deleted-07fd1022-7037-4a03-8c56-737464703551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.264 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.264 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.264 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.265 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.736 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:49:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809600926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.869 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.945 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:49:34 np0005605476 nova_compute[239846]: 2026-02-02 17:49:34.946 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.157 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.158 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4395MB free_disk=59.89714303519577GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.158 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.159 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.230 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance a655d235-b578-4696-84d1-169799ca8ec5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.231 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.231 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.285 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 200 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 36 KiB/s wr, 125 op/s
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928850442' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928850442' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:49:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224372175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.822 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.828 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.861 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.890 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:49:35 np0005605476 nova_compute[239846]: 2026-02-02 17:49:35.890 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3010166249' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3010166249' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.721 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.722 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.735 239853 DEBUG nova.objects.instance [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:49:36
Feb  2 12:49:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:49:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:49:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Feb  2 12:49:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.754 239853 INFO nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.768 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.845 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.891 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.921 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.921 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:36 np0005605476 nova_compute[239846]: 2026-02-02 17:49:36.922 239853 INFO nova.compute.manager [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Attaching volume fd719620-d943-42ae-b3a6-4b152f79f1da to /dev/vdb#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.035 239853 DEBUG os_brick.utils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.036 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.045 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.046 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[e810359c-7f12-4656-bec2-c09dccee45fc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.047 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.055 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.055 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[8f775f97-fa3e-47ca-94e2-463ca59587f9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.056 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.064 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.064 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[dd529ac2-d286-4f5f-b70e-f205e6ec1ec6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.066 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6eaba7-f267-4712-8ed0-0a744f1d9bbe]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.066 239853 DEBUG oslo_concurrency.processutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.086 239853 DEBUG oslo_concurrency.processutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.088 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.088 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.088 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.089 239853 DEBUG os_brick.utils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.089 239853 DEBUG nova.virt.block_device [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updating existing volume attachment record: 754b3437-ce2d-4ab0-8500-60c68997b854 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:49:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb  2 12:49:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb  2 12:49:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb  2 12:49:37 np0005605476 nova_compute[239846]: 2026-02-02 17:49:37.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 44 KiB/s wr, 224 op/s
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:49:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:49:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297516112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.062 239853 DEBUG nova.objects.instance [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.087 239853 DEBUG nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Attempting to attach volume fd719620-d943-42ae-b3a6-4b152f79f1da with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.090 239853 DEBUG nova.virt.libvirt.guest [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-fd719620-d943-42ae-b3a6-4b152f79f1da">
Feb  2 12:49:38 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:49:38 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:38 np0005605476 nova_compute[239846]:  <serial>fd719620-d943-42ae-b3a6-4b152f79f1da</serial>
Feb  2 12:49:38 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:38 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.196 239853 DEBUG nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.197 239853 DEBUG nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.197 239853 DEBUG nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.198 239853 DEBUG nova.virt.libvirt.driver [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] No VIF found with MAC fa:16:3e:f1:d4:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:38 np0005605476 nova_compute[239846]: 2026-02-02 17:49:38.401 239853 DEBUG oslo_concurrency.lockutils [None req-8a1a77e7-1e68-4e61-9ad4-6fd88bd76ff2 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585380672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585380672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb  2 12:49:39 np0005605476 nova_compute[239846]: 2026-02-02 17:49:39.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:49:39 np0005605476 nova_compute[239846]: 2026-02-02 17:49:39.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:49:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 10 KiB/s wr, 189 op/s
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464350000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464350000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:39 np0005605476 nova_compute[239846]: 2026-02-02 17:49:39.738 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb  2 12:49:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb  2 12:49:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb  2 12:49:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb  2 12:49:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb  2 12:49:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb  2 12:49:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 8.7 KiB/s wr, 212 op/s
Feb  2 12:49:41 np0005605476 podman[254282]: 2026-02-02 17:49:41.629713396 +0000 UTC m=+0.077696422 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:49:41 np0005605476 nova_compute[239846]: 2026-02-02 17:49:41.847 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:42Z|00111|binding|INFO|Releasing lport c8fb2ce4-77e1-4c4e-bd85-babb3a20f6eb from this chassis (sb_readonly=0)
Feb  2 12:49:42 np0005605476 nova_compute[239846]: 2026-02-02 17:49:42.358 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb  2 12:49:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb  2 12:49:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb  2 12:49:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.1 KiB/s wr, 115 op/s
Feb  2 12:49:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb  2 12:49:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb  2 12:49:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb  2 12:49:44 np0005605476 podman[254301]: 2026-02-02 17:49:44.645077711 +0000 UTC m=+0.086457648 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:49:44 np0005605476 nova_compute[239846]: 2026-02-02 17:49:44.740 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb  2 12:49:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb  2 12:49:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb  2 12:49:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 15 KiB/s wr, 96 op/s
Feb  2 12:49:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb  2 12:49:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb  2 12:49:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb  2 12:49:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:46.639 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:46.640 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:46.641 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:46 np0005605476 nova_compute[239846]: 2026-02-02 17:49:46.823 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054571.8215654, 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:49:46 np0005605476 nova_compute[239846]: 2026-02-02 17:49:46.823 239853 INFO nova.compute.manager [-] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:49:46 np0005605476 nova_compute[239846]: 2026-02-02 17:49:46.847 239853 DEBUG nova.compute.manager [None req-92f77bdb-03ef-4268-8ce5-9d112326db94 - - - - - -] [instance: 8d00d4e2-c297-40a8-b6fe-9418b8da0b2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:49:46 np0005605476 nova_compute[239846]: 2026-02-02 17:49:46.849 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 16 KiB/s wr, 110 op/s
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007619956527563901 of space, bias 1.0, pg target 0.22859869582691703 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003534537193217361 of space, bias 1.0, pg target 0.10603611579652084 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2780086613962052e-06 of space, bias 1.0, pg target 0.0003834025984188616 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660808690662239 of space, bias 1.0, pg target 0.19982426071986717 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.907201366669236e-07 of space, bias 4.0, pg target 0.0010688641640003082 quantized to 16 (current 16)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:49:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.038 239853 DEBUG oslo_concurrency.lockutils [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.038 239853 DEBUG oslo_concurrency.lockutils [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.051 239853 INFO nova.compute.manager [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Detaching volume fd719620-d943-42ae-b3a6-4b152f79f1da#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.210 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4235144401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.452 239853 INFO nova.virt.block_device [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Attempting to driver detach volume fd719620-d943-42ae-b3a6-4b152f79f1da from mountpoint /dev/vdb#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.462 239853 DEBUG nova.virt.libvirt.driver [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Attempting to detach device vdb from instance a655d235-b578-4696-84d1-169799ca8ec5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.463 239853 DEBUG nova.virt.libvirt.guest [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-fd719620-d943-42ae-b3a6-4b152f79f1da">
Feb  2 12:49:48 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <serial>fd719620-d943-42ae-b3a6-4b152f79f1da</serial>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:48 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.473 239853 INFO nova.virt.libvirt.driver [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully detached device vdb from instance a655d235-b578-4696-84d1-169799ca8ec5 from the persistent domain config.#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.474 239853 DEBUG nova.virt.libvirt.driver [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a655d235-b578-4696-84d1-169799ca8ec5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.475 239853 DEBUG nova.virt.libvirt.guest [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-fd719620-d943-42ae-b3a6-4b152f79f1da">
Feb  2 12:49:48 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <serial>fd719620-d943-42ae-b3a6-4b152f79f1da</serial>
Feb  2 12:49:48 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:49:48 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:49:48 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.568 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054588.567802, a655d235-b578-4696-84d1-169799ca8ec5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.570 239853 DEBUG nova.virt.libvirt.driver [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a655d235-b578-4696-84d1-169799ca8ec5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.573 239853 INFO nova.virt.libvirt.driver [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully detached device vdb from instance a655d235-b578-4696-84d1-169799ca8ec5 from the live domain config.#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.735 239853 DEBUG nova.objects.instance [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'flavor' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.776 239853 DEBUG oslo_concurrency.lockutils [None req-c111e177-042e-414a-8b46-57c81ac2875c e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.777 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.777 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.778 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.778 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.780 239853 INFO nova.compute.manager [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Terminating instance#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.782 239853 DEBUG nova.compute.manager [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:49:48 np0005605476 kernel: tap727b7d70-b8 (unregistering): left promiscuous mode
Feb  2 12:49:48 np0005605476 NetworkManager[49022]: <info>  [1770054588.8292] device (tap727b7d70-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:49:48 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:48Z|00112|binding|INFO|Releasing lport 727b7d70-b88e-4a8a-b74b-73820685a938 from this chassis (sb_readonly=0)
Feb  2 12:49:48 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:48Z|00113|binding|INFO|Setting lport 727b7d70-b88e-4a8a-b74b-73820685a938 down in Southbound
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.864 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:48 np0005605476 ovn_controller[146041]: 2026-02-02T17:49:48Z|00114|binding|INFO|Removing iface tap727b7d70-b8 ovn-installed in OVS
Feb  2 12:49:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:48.871 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:d4:05 10.100.0.3'], port_security=['fa:16:3e:f1:d4:05 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a655d235-b578-4696-84d1-169799ca8ec5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b00a155c-f468-43b5-8966-400475f07a2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a06203a436464cf3968b3ecfc022e1dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '074795d7-6af3-42de-aea8-6dbfcd1ba557', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be054af0-a896-42b9-84a2-8460e7163b78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=727b7d70-b88e-4a8a-b74b-73820685a938) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:49:48 np0005605476 nova_compute[239846]: 2026-02-02 17:49:48.872 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:48.874 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 727b7d70-b88e-4a8a-b74b-73820685a938 in datapath b00a155c-f468-43b5-8966-400475f07a2d unbound from our chassis#033[00m
Feb  2 12:49:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:48.875 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b00a155c-f468-43b5-8966-400475f07a2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:49:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:48.877 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed553e8-ba3b-4b95-95dd-d0a3e4bcecb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:48.878 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d namespace which is not needed anymore#033[00m
Feb  2 12:49:48 np0005605476 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb  2 12:49:48 np0005605476 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.520s CPU time.
Feb  2 12:49:48 np0005605476 systemd-machined[208080]: Machine qemu-9-instance-00000009 terminated.
Feb  2 12:49:48 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [NOTICE]   (253460) : haproxy version is 2.8.14-c23fe91
Feb  2 12:49:48 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [NOTICE]   (253460) : path to executable is /usr/sbin/haproxy
Feb  2 12:49:48 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [WARNING]  (253460) : Exiting Master process...
Feb  2 12:49:48 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [ALERT]    (253460) : Current worker (253479) exited with code 143 (Terminated)
Feb  2 12:49:48 np0005605476 neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d[253450]: [WARNING]  (253460) : All workers exited. Exiting... (0)
Feb  2 12:49:48 np0005605476 systemd[1]: libpod-468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e.scope: Deactivated successfully.
Feb  2 12:49:48 np0005605476 podman[254353]: 2026-02-02 17:49:48.995426029 +0000 UTC m=+0.048875203 container died 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:49:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e-userdata-shm.mount: Deactivated successfully.
Feb  2 12:49:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e14c8bbcefa4a851871f56ad348c9cc75b48f62acf4b2c11bd5b67fcf7abe73e-merged.mount: Deactivated successfully.
Feb  2 12:49:49 np0005605476 podman[254353]: 2026-02-02 17:49:49.038039015 +0000 UTC m=+0.091488149 container cleanup 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.037 239853 INFO nova.virt.libvirt.driver [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Instance destroyed successfully.#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.038 239853 DEBUG nova.objects.instance [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lazy-loading 'resources' on Instance uuid a655d235-b578-4696-84d1-169799ca8ec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:49:49 np0005605476 systemd[1]: libpod-conmon-468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e.scope: Deactivated successfully.
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.057 239853 DEBUG nova.virt.libvirt.vif [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1422861368',display_name='tempest-VolumesSnapshotTestJSON-instance-1422861368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1422861368',id=9,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAadW9ykLaVsAIfNYDHzPuv1Na7Doa6R5vVeQXqUt7lUHuhwtCyoz3QzihZxzt2hJm+pPRqFvSpZruqVCfNz5jtWZaXln5ng4w9NzTpfw+dF+vvFINflO0q6xWAVj0/5BQ==',key_name='tempest-keypair-1118395760',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:49:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a06203a436464cf3968b3ecfc022e1dd',ramdisk_id='',reservation_id='r-2t03ay2v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-2080120933',owner_user_name='tempest-VolumesSnapshotTestJSON-2080120933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:49:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5e6162e875a40d7b58553a223857aa3',uuid=a655d235-b578-4696-84d1-169799ca8ec5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.058 239853 DEBUG nova.network.os_vif_util [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converting VIF {"id": "727b7d70-b88e-4a8a-b74b-73820685a938", "address": "fa:16:3e:f1:d4:05", "network": {"id": "b00a155c-f468-43b5-8966-400475f07a2d", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-208351556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a06203a436464cf3968b3ecfc022e1dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap727b7d70-b8", "ovs_interfaceid": "727b7d70-b88e-4a8a-b74b-73820685a938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.059 239853 DEBUG nova.network.os_vif_util [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.060 239853 DEBUG os_vif [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.063 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.064 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap727b7d70-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.067 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.072 239853 DEBUG nova.compute.manager [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-unplugged-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.072 239853 DEBUG oslo_concurrency.lockutils [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.073 239853 DEBUG oslo_concurrency.lockutils [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.073 239853 DEBUG oslo_concurrency.lockutils [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.074 239853 DEBUG nova.compute.manager [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] No waiting events found dispatching network-vif-unplugged-727b7d70-b88e-4a8a-b74b-73820685a938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.074 239853 DEBUG nova.compute.manager [req-4ba616a3-4fc3-4fb8-b040-ee01c36d02cc req-09b34b58-0ea8-4f34-897b-90cba6e62828 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-unplugged-727b7d70-b88e-4a8a-b74b-73820685a938 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.075 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.078 239853 INFO os_vif [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:d4:05,bridge_name='br-int',has_traffic_filtering=True,id=727b7d70-b88e-4a8a-b74b-73820685a938,network=Network(b00a155c-f468-43b5-8966-400475f07a2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap727b7d70-b8')#033[00m
Feb  2 12:49:49 np0005605476 podman[254394]: 2026-02-02 17:49:49.093023928 +0000 UTC m=+0.038510112 container remove 468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.098 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[eac14708-baab-447d-9fd6-137982cff564]: (4, ('Mon Feb  2 05:49:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d (468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e)\n468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e\nMon Feb  2 05:49:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d (468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e)\n468a46686b6eb67d7ac79bea175bbd35d2052c220a1f87b2f771d05bd475718e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.100 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0efd366-4087-4c95-9cb5-56aa3f42f16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.101 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb00a155c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.103 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:49 np0005605476 kernel: tapb00a155c-f0: left promiscuous mode
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.111 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.114 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[27d760f2-349b-4063-81e5-ac061a34dce0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.128 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[37f3ec15-218b-44d8-87d4-8f78b302364a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.129 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9097e5-30a7-4666-8785-542aeae7bfb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.146 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf5fb05-e469-4d4a-bd95-72b2a9402fac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379670, 'reachable_time': 28137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254425, 'error': None, 'target': 'ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.150 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b00a155c-f468-43b5-8966-400475f07a2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:49:49 np0005605476 systemd[1]: run-netns-ovnmeta\x2db00a155c\x2df468\x2d43b5\x2d8966\x2d400475f07a2d.mount: Deactivated successfully.
Feb  2 12:49:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:49:49.150 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[99b48e24-4eea-4d8b-ba62-f492858f0fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:49:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb  2 12:49:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb  2 12:49:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.371 239853 INFO nova.virt.libvirt.driver [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Deleting instance files /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5_del#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.371 239853 INFO nova.virt.libvirt.driver [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Deletion of /var/lib/nova/instances/a655d235-b578-4696-84d1-169799ca8ec5_del complete#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.416 239853 INFO nova.compute.manager [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.416 239853 DEBUG oslo.service.loopingcall [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.416 239853 DEBUG nova.compute.manager [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.416 239853 DEBUG nova.network.neutron [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:49:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 57 op/s
Feb  2 12:49:49 np0005605476 nova_compute[239846]: 2026-02-02 17:49:49.741 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508914457' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508914457' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.694 239853 DEBUG nova.network.neutron [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.715 239853 INFO nova.compute.manager [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Took 1.30 seconds to deallocate network for instance.#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.810 239853 DEBUG nova.compute.manager [req-6a5ac3e8-dd69-4eb7-b416-16d929900534 req-9575338e-e86d-40ae-8e85-29caeab5fd28 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-deleted-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.852 239853 WARNING nova.volume.cinder [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Attachment 754b3437-ce2d-4ab0-8500-60c68997b854 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 754b3437-ce2d-4ab0-8500-60c68997b854. (HTTP 404) (Request-ID: req-e386ec03-9bd9-4175-b89f-60da39135839)#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.852 239853 INFO nova.compute.manager [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.908 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.909 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:50 np0005605476 nova_compute[239846]: 2026-02-02 17:49:50.981 239853 DEBUG oslo_concurrency.processutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.210 239853 DEBUG nova.compute.manager [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.211 239853 DEBUG oslo_concurrency.lockutils [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a655d235-b578-4696-84d1-169799ca8ec5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.212 239853 DEBUG oslo_concurrency.lockutils [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.213 239853 DEBUG oslo_concurrency.lockutils [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.214 239853 DEBUG nova.compute.manager [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] No waiting events found dispatching network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.214 239853 WARNING nova.compute.manager [req-f6817155-ab5f-4b37-abc7-2f7bf57a8785 req-ebe2952a-9f41-4f79-a77c-26c2b1803e67 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Received unexpected event network-vif-plugged-727b7d70-b88e-4a8a-b74b-73820685a938 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:49:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb  2 12:49:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb  2 12:49:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb  2 12:49:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 110 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 11 KiB/s wr, 134 op/s
Feb  2 12:49:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:49:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143982494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.520 239853 DEBUG oslo_concurrency.processutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.525 239853 DEBUG nova.compute.provider_tree [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.540 239853 DEBUG nova.scheduler.client.report [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.560 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.584 239853 INFO nova.scheduler.client.report [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Deleted allocations for instance a655d235-b578-4696-84d1-169799ca8ec5#033[00m
Feb  2 12:49:51 np0005605476 nova_compute[239846]: 2026-02-02 17:49:51.651 239853 DEBUG oslo_concurrency.lockutils [None req-3cddef50-dcf3-4b9f-8ab3-887e9410bb59 e5e6162e875a40d7b58553a223857aa3 a06203a436464cf3968b3ecfc022e1dd - - default default] Lock "a655d235-b578-4696-84d1-169799ca8ec5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:49:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb  2 12:49:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb  2 12:49:52 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb  2 12:49:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 110 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 9.3 KiB/s wr, 115 op/s
Feb  2 12:49:54 np0005605476 nova_compute[239846]: 2026-02-02 17:49:54.065 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/23664210' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/23664210' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:54 np0005605476 nova_compute[239846]: 2026-02-02 17:49:54.742 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:49:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb  2 12:49:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb  2 12:49:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb  2 12:49:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 8.2 KiB/s wr, 235 op/s
Feb  2 12:49:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb  2 12:49:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb  2 12:49:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb  2 12:49:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 10 KiB/s wr, 238 op/s
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2227462395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1045567312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:49:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1045567312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:49:59 np0005605476 nova_compute[239846]: 2026-02-02 17:49:59.067 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:49:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb  2 12:49:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb  2 12:49:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb  2 12:49:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.6 KiB/s wr, 114 op/s
Feb  2 12:49:59 np0005605476 nova_compute[239846]: 2026-02-02 17:49:59.744 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb  2 12:50:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb  2 12:50:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb  2 12:50:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 9.3 KiB/s wr, 156 op/s
Feb  2 12:50:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3789690619' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3789690619' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.1 KiB/s wr, 97 op/s
Feb  2 12:50:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb  2 12:50:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb  2 12:50:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb  2 12:50:04 np0005605476 nova_compute[239846]: 2026-02-02 17:50:04.035 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054589.0341437, a655d235-b578-4696-84d1-169799ca8ec5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:50:04 np0005605476 nova_compute[239846]: 2026-02-02 17:50:04.036 239853 INFO nova.compute.manager [-] [instance: a655d235-b578-4696-84d1-169799ca8ec5] VM Stopped (Lifecycle Event)
Feb  2 12:50:04 np0005605476 nova_compute[239846]: 2026-02-02 17:50:04.062 239853 DEBUG nova.compute.manager [None req-0f0bb41e-4cd4-4e16-ba87-5e85723b40b7 - - - - - -] [instance: a655d235-b578-4696-84d1-169799ca8ec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:50:04 np0005605476 nova_compute[239846]: 2026-02-02 17:50:04.068 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb  2 12:50:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb  2 12:50:04 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb  2 12:50:04 np0005605476 nova_compute[239846]: 2026-02-02 17:50:04.745 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3545498178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3545498178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb  2 12:50:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 107 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 217 op/s
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899836349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899836349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385223793' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385223793' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178462810' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178462810' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4036187456' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4036187456' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3821644969' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3821644969' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:09 np0005605476 nova_compute[239846]: 2026-02-02 17:50:09.070 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2392980577' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2392980577' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Feb  2 12:50:09 np0005605476 nova_compute[239846]: 2026-02-02 17:50:09.782 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1695740707' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1695740707' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/339522630' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/339522630' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1619412204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1619412204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 88 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 2.1 MiB/s wr, 319 op/s
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1662548916' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1662548916' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:12 np0005605476 podman[254449]: 2026-02-02 17:50:12.609737956 +0000 UTC m=+0.057391502 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 12:50:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 88 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 1.4 MiB/s wr, 265 op/s
Feb  2 12:50:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126392047' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126392047' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:14 np0005605476 nova_compute[239846]: 2026-02-02 17:50:14.072 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:14 np0005605476 nova_compute[239846]: 2026-02-02 17:50:14.106 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:14 np0005605476 nova_compute[239846]: 2026-02-02 17:50:14.155 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:14 np0005605476 nova_compute[239846]: 2026-02-02 17:50:14.783 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:50:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 7.7 KiB/s wr, 271 op/s
Feb  2 12:50:15 np0005605476 podman[254469]: 2026-02-02 17:50:15.617733593 +0000 UTC m=+0.064377838 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.164736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617164897, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2525, "num_deletes": 272, "total_data_size": 3499500, "memory_usage": 3569344, "flush_reason": "Manual Compaction"}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617189811, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3438835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21441, "largest_seqno": 23965, "table_properties": {"data_size": 3426794, "index_size": 8023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25526, "raw_average_key_size": 21, "raw_value_size": 3402612, "raw_average_value_size": 2890, "num_data_blocks": 347, "num_entries": 1177, "num_filter_entries": 1177, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054475, "oldest_key_time": 1770054475, "file_creation_time": 1770054617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 25116 microseconds, and 4926 cpu microseconds.
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.189850) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3438835 bytes OK
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.189867) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.191798) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.191809) EVENT_LOG_v1 {"time_micros": 1770054617191806, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.191826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3488513, prev total WAL file size 3488513, number of live WAL files 2.
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.192474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3358KB)], [50(7256KB)]
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617192503, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10869586, "oldest_snapshot_seqno": -1}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5251 keys, 9144872 bytes, temperature: kUnknown
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617226438, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9144872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9105602, "index_size": 25016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 129549, "raw_average_key_size": 24, "raw_value_size": 9006893, "raw_average_value_size": 1715, "num_data_blocks": 1031, "num_entries": 5251, "num_filter_entries": 5251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.226693) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9144872 bytes
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.227835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 319.5 rd, 268.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5793, records dropped: 542 output_compression: NoCompression
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.227854) EVENT_LOG_v1 {"time_micros": 1770054617227844, "job": 26, "event": "compaction_finished", "compaction_time_micros": 34022, "compaction_time_cpu_micros": 15594, "output_level": 6, "num_output_files": 1, "total_output_size": 9144872, "num_input_records": 5793, "num_output_records": 5251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617228388, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054617229135, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.192406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.229163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.229167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.229169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.229171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:50:17.229173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:50:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 8.6 KiB/s wr, 272 op/s
Feb  2 12:50:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:50:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5284 writes, 23K keys, 5284 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5284 writes, 5284 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1917 writes, 8781 keys, 1917 commit groups, 1.0 writes per commit group, ingest: 11.38 MB, 0.02 MB/s#012Interval WAL: 1917 writes, 1917 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     66.7      0.42              0.07        13    0.032       0      0       0.0       0.0#012  L6      1/0    8.72 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    179.5    148.3      0.62              0.21        12    0.051     55K   6351       0.0       0.0#012 Sum      1/0    8.72 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    106.9    115.3      1.04              0.28        25    0.042     55K   6351       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2     93.7     96.4      0.63              0.13        12    0.053     30K   3147       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    179.5    148.3      0.62              0.21        12    0.051     55K   6351       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     67.0      0.42              0.07        12    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.027, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.11 GB read, 0.06 MB/s read, 1.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 304.00 MB usage: 11.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(709,10.94 MB,3.5988%) FilterBlock(26,159.36 KB,0.0511922%) IndexBlock(26,311.27 KB,0.0999902%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 12:50:19 np0005605476 nova_compute[239846]: 2026-02-02 17:50:19.074 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 6.2 KiB/s wr, 178 op/s
Feb  2 12:50:19 np0005605476 nova_compute[239846]: 2026-02-02 17:50:19.784 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb  2 12:50:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb  2 12:50:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb  2 12:50:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:21.095 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:50:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:21.096 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:50:21 np0005605476 nova_compute[239846]: 2026-02-02 17:50:21.096 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:50:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:50:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.8 KiB/s wr, 89 op/s
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.580249416 +0000 UTC m=+0.038378028 container create ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:50:21 np0005605476 systemd[1]: Started libpod-conmon-ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac.scope.
Feb  2 12:50:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.653614565 +0000 UTC m=+0.111743197 container init ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.562513058 +0000 UTC m=+0.020641690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.65948916 +0000 UTC m=+0.117617772 container start ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.662534516 +0000 UTC m=+0.120663138 container attach ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:50:21 np0005605476 naughty_hypatia[254656]: 167 167
Feb  2 12:50:21 np0005605476 systemd[1]: libpod-ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac.scope: Deactivated successfully.
Feb  2 12:50:21 np0005605476 conmon[254656]: conmon ba91e5cf9784c6a14faf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac.scope/container/memory.events
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.665508279 +0000 UTC m=+0.123636891 container died ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:50:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6f07330246fbb20ce5a32bb3a062032b33d015460427fe71211b6ccab9e75747-merged.mount: Deactivated successfully.
Feb  2 12:50:21 np0005605476 podman[254640]: 2026-02-02 17:50:21.700940783 +0000 UTC m=+0.159069385 container remove ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:50:21 np0005605476 systemd[1]: libpod-conmon-ba91e5cf9784c6a14fafd6aca8a436708107ac79d1a846a4cb5941342da68eac.scope: Deactivated successfully.
Feb  2 12:50:21 np0005605476 podman[254681]: 2026-02-02 17:50:21.814019717 +0000 UTC m=+0.035613301 container create 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Feb  2 12:50:21 np0005605476 systemd[1]: Started libpod-conmon-6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4.scope.
Feb  2 12:50:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:21 np0005605476 podman[254681]: 2026-02-02 17:50:21.799573701 +0000 UTC m=+0.021167255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:21 np0005605476 podman[254681]: 2026-02-02 17:50:21.905979618 +0000 UTC m=+0.127573252 container init 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:50:21 np0005605476 podman[254681]: 2026-02-02 17:50:21.913280103 +0000 UTC m=+0.134873657 container start 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:50:21 np0005605476 podman[254681]: 2026-02-02 17:50:21.916603576 +0000 UTC m=+0.138197130 container attach 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:50:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:50:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:50:22 np0005605476 ecstatic_hugle[254698]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:50:22 np0005605476 ecstatic_hugle[254698]: --> All data devices are unavailable
Feb  2 12:50:22 np0005605476 systemd[1]: libpod-6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4.scope: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254681]: 2026-02-02 17:50:22.336647624 +0000 UTC m=+0.558241178 container died 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:50:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8e1abbb14f35d920723ada1d323478e77806124541d43b4c49d7899bb4bc0f31-merged.mount: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254681]: 2026-02-02 17:50:22.391447952 +0000 UTC m=+0.613041506 container remove 6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hugle, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:50:22 np0005605476 systemd[1]: libpod-conmon-6fea812e2ce63969dbae4d0d3572bb10a26315a4b6aa7c9296c84e84e9dd49e4.scope: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.746169227 +0000 UTC m=+0.036573307 container create f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:50:22 np0005605476 systemd[1]: Started libpod-conmon-f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08.scope.
Feb  2 12:50:22 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.801067708 +0000 UTC m=+0.091471808 container init f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.806526571 +0000 UTC m=+0.096930651 container start f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:50:22 np0005605476 awesome_moser[254810]: 167 167
Feb  2 12:50:22 np0005605476 systemd[1]: libpod-f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08.scope: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.810965976 +0000 UTC m=+0.101370056 container attach f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.811432999 +0000 UTC m=+0.101837069 container died f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.727773251 +0000 UTC m=+0.018177381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:22 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c65b2d7b0baf59d8923a9bdf3117f490ac4753f89f85bd76fcb5a937fb66f0bb-merged.mount: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254793]: 2026-02-02 17:50:22.840762672 +0000 UTC m=+0.131166782 container remove f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_moser, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 12:50:22 np0005605476 systemd[1]: libpod-conmon-f16c03e428198a466c95b024676593745cc1234925d922e7d6ab4f173d7d6f08.scope: Deactivated successfully.
Feb  2 12:50:22 np0005605476 podman[254833]: 2026-02-02 17:50:22.981128661 +0000 UTC m=+0.054624194 container create 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:50:23 np0005605476 systemd[1]: Started libpod-conmon-5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5.scope.
Feb  2 12:50:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59f6e56be1fb9f41472db55444d026cfe1b74ca5c830fb42590ba44c9f547f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59f6e56be1fb9f41472db55444d026cfe1b74ca5c830fb42590ba44c9f547f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59f6e56be1fb9f41472db55444d026cfe1b74ca5c830fb42590ba44c9f547f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59f6e56be1fb9f41472db55444d026cfe1b74ca5c830fb42590ba44c9f547f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:22.957591101 +0000 UTC m=+0.031086724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:23.062154765 +0000 UTC m=+0.135650318 container init 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:23.068336249 +0000 UTC m=+0.141831812 container start 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:23.071430755 +0000 UTC m=+0.144926308 container attach 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]: {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    "0": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "devices": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "/dev/loop3"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            ],
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_name": "ceph_lv0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_size": "21470642176",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "name": "ceph_lv0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "tags": {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_name": "ceph",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.crush_device_class": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.encrypted": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.objectstore": "bluestore",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_id": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.vdo": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.with_tpm": "0"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            },
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "vg_name": "ceph_vg0"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        }
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    ],
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    "1": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "devices": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "/dev/loop4"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            ],
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_name": "ceph_lv1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_size": "21470642176",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "name": "ceph_lv1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "tags": {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_name": "ceph",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.crush_device_class": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.encrypted": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.objectstore": "bluestore",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_id": "1",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.vdo": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.with_tpm": "0"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            },
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "vg_name": "ceph_vg1"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        }
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    ],
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    "2": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "devices": [
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "/dev/loop5"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            ],
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_name": "ceph_lv2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_size": "21470642176",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "name": "ceph_lv2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "tags": {
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.cluster_name": "ceph",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.crush_device_class": "",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.encrypted": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.objectstore": "bluestore",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osd_id": "2",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.vdo": "0",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:                "ceph.with_tpm": "0"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            },
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "type": "block",
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:            "vg_name": "ceph_vg2"
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:        }
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]:    ]
Feb  2 12:50:23 np0005605476 stoic_goldwasser[254850]: }
Feb  2 12:50:23 np0005605476 systemd[1]: libpod-5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5.scope: Deactivated successfully.
Feb  2 12:50:23 np0005605476 conmon[254850]: conmon 5141e09a45e2c0a36656 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5.scope/container/memory.events
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:23.371948469 +0000 UTC m=+0.445444052 container died 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:50:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a59f6e56be1fb9f41472db55444d026cfe1b74ca5c830fb42590ba44c9f547f3-merged.mount: Deactivated successfully.
Feb  2 12:50:23 np0005605476 podman[254833]: 2026-02-02 17:50:23.407910369 +0000 UTC m=+0.481405912 container remove 5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:50:23 np0005605476 systemd[1]: libpod-conmon-5141e09a45e2c0a36656c51fb364ff1c427119dc1ca556d0733eeb36b3ea07d5.scope: Deactivated successfully.
Feb  2 12:50:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.6 KiB/s wr, 83 op/s
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.823359928 +0000 UTC m=+0.039641684 container create ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:50:23 np0005605476 systemd[1]: Started libpod-conmon-ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d.scope.
Feb  2 12:50:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.882987081 +0000 UTC m=+0.099268837 container init ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.887653932 +0000 UTC m=+0.103935728 container start ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.891324065 +0000 UTC m=+0.107605841 container attach ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:50:23 np0005605476 systemd[1]: libpod-ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d.scope: Deactivated successfully.
Feb  2 12:50:23 np0005605476 xenodochial_kalam[254948]: 167 167
Feb  2 12:50:23 np0005605476 conmon[254948]: conmon ac2fc14e332d3b31852a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d.scope/container/memory.events
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.892751435 +0000 UTC m=+0.109033221 container died ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.80383125 +0000 UTC m=+0.020113016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:23 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ebf32abcbbf4baa881fd07e1c4ca08d5f1846866ba0fc1c70e862b6541b55fd9-merged.mount: Deactivated successfully.
Feb  2 12:50:23 np0005605476 podman[254932]: 2026-02-02 17:50:23.930636119 +0000 UTC m=+0.146917865 container remove ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:50:23 np0005605476 systemd[1]: libpod-conmon-ac2fc14e332d3b31852abf78046f93efdf1cbbbca56356503eeb8ae1ff856a4d.scope: Deactivated successfully.
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.066416958 +0000 UTC m=+0.061907827 container create 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 12:50:24 np0005605476 nova_compute[239846]: 2026-02-02 17:50:24.075 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:24 np0005605476 systemd[1]: Started libpod-conmon-926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f.scope.
Feb  2 12:50:24 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2691c42dba58f40c36f70ef39ab2f3ffecd6ce1fbf81408bf6c049310fbc42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2691c42dba58f40c36f70ef39ab2f3ffecd6ce1fbf81408bf6c049310fbc42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2691c42dba58f40c36f70ef39ab2f3ffecd6ce1fbf81408bf6c049310fbc42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2691c42dba58f40c36f70ef39ab2f3ffecd6ce1fbf81408bf6c049310fbc42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.034155753 +0000 UTC m=+0.029646712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.143149352 +0000 UTC m=+0.138640251 container init 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.158036679 +0000 UTC m=+0.153527548 container start 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.161478576 +0000 UTC m=+0.156969445 container attach 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:50:24 np0005605476 lvm[255068]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:50:24 np0005605476 lvm[255068]: VG ceph_vg1 finished
Feb  2 12:50:24 np0005605476 lvm[255067]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:50:24 np0005605476 lvm[255067]: VG ceph_vg0 finished
Feb  2 12:50:24 np0005605476 lvm[255070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:50:24 np0005605476 lvm[255070]: VG ceph_vg2 finished
Feb  2 12:50:24 np0005605476 nova_compute[239846]: 2026-02-02 17:50:24.786 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:24 np0005605476 happy_wiles[254989]: {}
Feb  2 12:50:24 np0005605476 systemd[1]: libpod-926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f.scope: Deactivated successfully.
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.852093528 +0000 UTC m=+0.847584417 container died 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:50:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd2691c42dba58f40c36f70ef39ab2f3ffecd6ce1fbf81408bf6c049310fbc42-merged.mount: Deactivated successfully.
Feb  2 12:50:24 np0005605476 podman[254972]: 2026-02-02 17:50:24.890876006 +0000 UTC m=+0.886366885 container remove 926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:50:24 np0005605476 systemd[1]: libpod-conmon-926b7c6157d7422ae57d67ae30bf510f8185dafeeb0426fa70f0f04b9140e10f.scope: Deactivated successfully.
Feb  2 12:50:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:50:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:50:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:25 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:25.098 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:50:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1153831777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1153831777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1705106891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1705106891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.2 KiB/s wr, 12 op/s
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412407582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/517834063' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/517834063' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:29 np0005605476 nova_compute[239846]: 2026-02-02 17:50:29.077 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb  2 12:50:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb  2 12:50:29 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb  2 12:50:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb  2 12:50:29 np0005605476 nova_compute[239846]: 2026-02-02 17:50:29.788 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb  2 12:50:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb  2 12:50:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb  2 12:50:30 np0005605476 nova_compute[239846]: 2026-02-02 17:50:30.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.4 KiB/s wr, 80 op/s
Feb  2 12:50:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2751848099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:33 np0005605476 nova_compute[239846]: 2026-02-02 17:50:33.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:33 np0005605476 nova_compute[239846]: 2026-02-02 17:50:33.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:50:33 np0005605476 nova_compute[239846]: 2026-02-02 17:50:33.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:50:33 np0005605476 nova_compute[239846]: 2026-02-02 17:50:33.291 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:50:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.7 KiB/s wr, 78 op/s
Feb  2 12:50:34 np0005605476 nova_compute[239846]: 2026-02-02 17:50:34.079 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:34 np0005605476 nova_compute[239846]: 2026-02-02 17:50:34.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:34 np0005605476 nova_compute[239846]: 2026-02-02 17:50:34.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:34 np0005605476 nova_compute[239846]: 2026-02-02 17:50:34.790 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.283 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.283 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.283 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.284 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.284 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 344 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 32 MiB/s wr, 89 op/s
Feb  2 12:50:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:50:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915835861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:50:35 np0005605476 nova_compute[239846]: 2026-02-02 17:50:35.842 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.023 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.024 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4598MB free_disk=59.98816792666912GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.024 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.025 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.180 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.180 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.200 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.231 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.232 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.248 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.275 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.302 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:50:36
Feb  2 12:50:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:50:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:50:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes']
Feb  2 12:50:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:50:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:50:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210602144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.818 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.824 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.855 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.880 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:50:36 np0005605476 nova_compute[239846]: 2026-02-02 17:50:36.880 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 520 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 52 MiB/s wr, 205 op/s
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:50:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.640 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "610064dc-da47-4fb7-b1ed-20f04ec73639" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.641 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.662 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.737 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.737 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.742 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.742 239853 INFO nova.compute.claims [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:50:38 np0005605476 nova_compute[239846]: 2026-02-02 17:50:38.847 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.082 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:50:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145080545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.384 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.388 239853 DEBUG nova.compute.provider_tree [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.405 239853 DEBUG nova.scheduler.client.report [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.427 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.428 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:50:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 664 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 58 MiB/s wr, 174 op/s
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.479 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.480 239853 DEBUG nova.network.neutron [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.500 239853 INFO nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.517 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.596 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.598 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.598 239853 INFO nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Creating image(s)#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.620 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.644 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.666 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.669 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb  2 12:50:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb  2 12:50:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.720 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.721 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.722 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.722 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.740 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.744 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 610064dc-da47-4fb7-b1ed-20f04ec73639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.792 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.820 239853 DEBUG nova.network.neutron [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.820 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:50:39 np0005605476 nova_compute[239846]: 2026-02-02 17:50:39.954 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 610064dc-da47-4fb7-b1ed-20f04ec73639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.004 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] resizing rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.081 239853 DEBUG nova.objects.instance [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lazy-loading 'migration_context' on Instance uuid 610064dc-da47-4fb7-b1ed-20f04ec73639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:50:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.103 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.103 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Ensure instance console log exists: /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.103 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.104 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.104 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.106 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.109 239853 WARNING nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.113 239853 DEBUG nova.virt.libvirt.host [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.113 239853 DEBUG nova.virt.libvirt.host [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.116 239853 DEBUG nova.virt.libvirt.host [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.116 239853 DEBUG nova.virt.libvirt.host [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.116 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.117 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.117 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.117 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.118 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.119 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.119 239853 DEBUG nova.virt.hardware [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.121 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870208174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.618 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.640 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.643 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.882 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.882 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.882 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:50:40 np0005605476 nova_compute[239846]: 2026-02-02 17:50:40.882 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:50:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536488197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.155 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.156 239853 DEBUG nova.objects.instance [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 610064dc-da47-4fb7-b1ed-20f04ec73639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.170 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <uuid>610064dc-da47-4fb7-b1ed-20f04ec73639</uuid>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <name>instance-0000000a</name>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:name>tempest-VolumesNegativeTest-instance-1813611999</nova:name>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:50:40</nova:creationTime>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:user uuid="cbcc5bdf45d541d6ba187d5d7a2f80dc">tempest-VolumesNegativeTest-872757425-project-member</nova:user>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <nova:project uuid="663a83f622294d2cb0da1b977a9dfd64">tempest-VolumesNegativeTest-872757425</nova:project>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <nova:ports/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="serial">610064dc-da47-4fb7-b1ed-20f04ec73639</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="uuid">610064dc-da47-4fb7-b1ed-20f04ec73639</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/610064dc-da47-4fb7-b1ed-20f04ec73639_disk">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/console.log" append="off"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:50:41 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:50:41 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:50:41 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:50:41 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.220 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.220 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.221 239853 INFO nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Using config drive#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.237 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.357 239853 INFO nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Creating config drive at /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.361 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcvkwzetf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 104 MiB/s wr, 268 op/s
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.479 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcvkwzetf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.510 239853 DEBUG nova.storage.rbd_utils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] rbd image 610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.515 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config 610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.617 239853 DEBUG oslo_concurrency.processutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config 610064dc-da47-4fb7-b1ed-20f04ec73639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:41 np0005605476 nova_compute[239846]: 2026-02-02 17:50:41.618 239853 INFO nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Deleting local config drive /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639/disk.config because it was imported into RBD.#033[00m
Feb  2 12:50:41 np0005605476 systemd-machined[208080]: New machine qemu-10-instance-0000000a.
Feb  2 12:50:41 np0005605476 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.049 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054642.0495913, 610064dc-da47-4fb7-b1ed-20f04ec73639 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.050 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.053 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.053 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.056 239853 INFO nova.virt.libvirt.driver [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance spawned successfully.#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.056 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.072 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.076 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.080 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.080 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.081 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.081 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.081 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.082 239853 DEBUG nova.virt.libvirt.driver [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.107 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.108 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054642.0515168, 610064dc-da47-4fb7-b1ed-20f04ec73639 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.108 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] VM Started (Lifecycle Event)#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.136 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.139 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.146 239853 INFO nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Took 2.55 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.146 239853 DEBUG nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.155 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.198 239853 INFO nova.compute.manager [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Took 3.49 seconds to build instance.#033[00m
Feb  2 12:50:42 np0005605476 nova_compute[239846]: 2026-02-02 17:50:42.214 239853 DEBUG oslo_concurrency.lockutils [None req-2905424e-6a46-4bec-b403-a1ce502ce04f cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.367 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.367 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/869920575' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.383 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:50:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 104 MiB/s wr, 268 op/s
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.471 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "610064dc-da47-4fb7-b1ed-20f04ec73639" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.472 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.472 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "610064dc-da47-4fb7-b1ed-20f04ec73639-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.472 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.473 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.474 239853 INFO nova.compute.manager [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Terminating instance#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.476 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "refresh_cache-610064dc-da47-4fb7-b1ed-20f04ec73639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.476 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquired lock "refresh_cache-610064dc-da47-4fb7-b1ed-20f04ec73639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.476 239853 DEBUG nova.network.neutron [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.492 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.492 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.500 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.501 239853 INFO nova.compute.claims [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:50:43 np0005605476 podman[255520]: 2026-02-02 17:50:43.603779978 +0000 UTC m=+0.052909796 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb  2 12:50:43 np0005605476 nova_compute[239846]: 2026-02-02 17:50:43.632 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb  2 12:50:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb  2 12:50:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.084 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:50:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183328997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.147 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.152 239853 DEBUG nova.compute.provider_tree [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.173 239853 DEBUG nova.scheduler.client.report [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.192 239853 DEBUG nova.network.neutron [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.205 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.206 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.249 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.249 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.263 239853 INFO nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.284 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.400 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.401 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.402 239853 INFO nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Creating image(s)#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.420 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.443 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.462 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.465 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.544 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.546 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.546 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.546 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.567 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.571 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.620 239853 DEBUG nova.policy [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '35a3cbbc2e32427f9356703501969892', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e9c44462f87f421099e0b0d1376904c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:50:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb  2 12:50:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.793 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.831 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.891 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] resizing rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.956 239853 DEBUG nova.objects.instance [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'migration_context' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.970 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.971 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Ensure instance console log exists: /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.971 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.971 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:44 np0005605476 nova_compute[239846]: 2026-02-02 17:50:44.972 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.013 239853 DEBUG nova.network.neutron [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.035 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Releasing lock "refresh_cache-610064dc-da47-4fb7-b1ed-20f04ec73639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.035 239853 DEBUG nova.compute.manager [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:50:45 np0005605476 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb  2 12:50:45 np0005605476 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 3.402s CPU time.
Feb  2 12:50:45 np0005605476 systemd-machined[208080]: Machine qemu-10-instance-0000000a terminated.
Feb  2 12:50:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.253 239853 INFO nova.virt.libvirt.driver [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance destroyed successfully.#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.253 239853 DEBUG nova.objects.instance [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lazy-loading 'resources' on Instance uuid 610064dc-da47-4fb7-b1ed-20f04ec73639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:50:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 78 MiB/s wr, 376 op/s
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.504 239853 INFO nova.virt.libvirt.driver [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Deleting instance files /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639_del#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.505 239853 INFO nova.virt.libvirt.driver [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Deletion of /var/lib/nova/instances/610064dc-da47-4fb7-b1ed-20f04ec73639_del complete#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.718 239853 INFO nova.compute.manager [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.719 239853 DEBUG oslo.service.loopingcall [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.720 239853 DEBUG nova.compute.manager [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:50:45 np0005605476 nova_compute[239846]: 2026-02-02 17:50:45.720 239853 DEBUG nova.network.neutron [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.059 239853 DEBUG nova.network.neutron [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.071 239853 DEBUG nova.network.neutron [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.090 239853 INFO nova.compute.manager [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Took 0.37 seconds to deallocate network for instance.#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.151 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.152 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.222 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Successfully created port: 82cd628a-7fae-47cb-ba3b-d2c670304572 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.230 239853 DEBUG oslo_concurrency.processutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:46.640 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:46.640 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:46.640 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:46 np0005605476 podman[255769]: 2026-02-02 17:50:46.644827402 +0000 UTC m=+0.092258690 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb  2 12:50:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:50:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862564113' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.725 239853 DEBUG oslo_concurrency.processutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.729 239853 DEBUG nova.compute.provider_tree [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.743 239853 DEBUG nova.scheduler.client.report [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.769 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063154923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.812 239853 INFO nova.scheduler.client.report [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Deleted allocations for instance 610064dc-da47-4fb7-b1ed-20f04ec73639#033[00m
Feb  2 12:50:46 np0005605476 nova_compute[239846]: 2026-02-02 17:50:46.874 239853 DEBUG oslo_concurrency.lockutils [None req-094cf6f1-5be3-4b0b-aa25-a9abf74b8ef9 cbcc5bdf45d541d6ba187d5d7a2f80dc 663a83f622294d2cb0da1b977a9dfd64 - - default default] Lock "610064dc-da47-4fb7-b1ed-20f04ec73639" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.016 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Successfully updated port: 82cd628a-7fae-47cb-ba3b-d2c670304572 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.030 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.031 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquired lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.031 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.100 239853 DEBUG nova.compute.manager [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.100 239853 DEBUG nova.compute.manager [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing instance network info cache due to event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.101 239853 DEBUG oslo_concurrency.lockutils [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.145 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 30 MiB/s wr, 366 op/s
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00029868991717918393 of space, bias 1.0, pg target 0.08960697515375518 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.017025614588988932 of space, bias 1.0, pg target 5.10768437669668 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.533846374817749e-06 of space, bias 1.0, pg target 0.00045248468057123594 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660229294857585 of space, bias 1.0, pg target 0.19647676419829876 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.713788839773546e-07 of space, bias 4.0, pg target 0.0009102270830932783 quantized to 16 (current 16)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011255555284235201 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012381110812658724 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:50:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.736 239853 DEBUG nova.network.neutron [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.758 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Releasing lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.758 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Instance network_info: |[{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.759 239853 DEBUG oslo_concurrency.lockutils [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.759 239853 DEBUG nova.network.neutron [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.763 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Start _get_guest_xml network_info=[{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.768 239853 WARNING nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.772 239853 DEBUG nova.virt.libvirt.host [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.773 239853 DEBUG nova.virt.libvirt.host [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.780 239853 DEBUG nova.virt.libvirt.host [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.780 239853 DEBUG nova.virt.libvirt.host [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.781 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.781 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.782 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.782 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.782 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.782 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.782 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.783 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.783 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.783 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.783 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.783 239853 DEBUG nova.virt.hardware [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:50:47 np0005605476 nova_compute[239846]: 2026-02-02 17:50:47.786 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb  2 12:50:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb  2 12:50:47 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634783337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.313 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.345 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.350 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.801 239853 DEBUG nova.network.neutron [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updated VIF entry in instance network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.802 239853 DEBUG nova.network.neutron [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.818 239853 DEBUG oslo_concurrency.lockutils [req-00d08932-3ee7-4b3e-96ef-63dfcb46256e req-d4ab07e3-44ba-49de-978f-9b71b30e6f64 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868458702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.881 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.882 239853 DEBUG nova.virt.libvirt.vif [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1184645063',display_name='tempest-TestStampPattern-server-1184645063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1184645063',id=11,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-m0cz5nux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:50:44Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=c29c7ea2-29c6-40eb-a75b-289e533ecc64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.882 239853 DEBUG nova.network.os_vif_util [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.883 239853 DEBUG nova.network.os_vif_util [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.884 239853 DEBUG nova.objects.instance [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.902 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <uuid>c29c7ea2-29c6-40eb-a75b-289e533ecc64</uuid>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <name>instance-0000000b</name>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestStampPattern-server-1184645063</nova:name>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:50:47</nova:creationTime>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:user uuid="35a3cbbc2e32427f9356703501969892">tempest-TestStampPattern-468537565-project-member</nova:user>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:project uuid="e9c44462f87f421099e0b0d1376904c4">tempest-TestStampPattern-468537565</nova:project>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <nova:port uuid="82cd628a-7fae-47cb-ba3b-d2c670304572">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="serial">c29c7ea2-29c6-40eb-a75b-289e533ecc64</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="uuid">c29c7ea2-29c6-40eb-a75b-289e533ecc64</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:be:0a:fe"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <target dev="tap82cd628a-7f"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/console.log" append="off"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:50:48 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:50:48 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:50:48 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:50:48 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.902 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Preparing to wait for external event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.903 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.903 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.903 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.904 239853 DEBUG nova.virt.libvirt.vif [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1184645063',display_name='tempest-TestStampPattern-server-1184645063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1184645063',id=11,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-m0cz5nux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:50:44Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=c29c7ea2-29c6-40eb-a75b-289e533ecc64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.904 239853 DEBUG nova.network.os_vif_util [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.905 239853 DEBUG nova.network.os_vif_util [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.905 239853 DEBUG os_vif [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.905 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.906 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.906 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.909 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.909 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82cd628a-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.909 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82cd628a-7f, col_values=(('external_ids', {'iface-id': '82cd628a-7fae-47cb-ba3b-d2c670304572', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:0a:fe', 'vm-uuid': 'c29c7ea2-29c6-40eb-a75b-289e533ecc64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.910 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:48 np0005605476 NetworkManager[49022]: <info>  [1770054648.9115] manager: (tap82cd628a-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.913 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.915 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.917 239853 INFO os_vif [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f')#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.962 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.962 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.962 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No VIF found with MAC fa:16:3e:be:0a:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.963 239853 INFO nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Using config drive#033[00m
Feb  2 12:50:48 np0005605476 nova_compute[239846]: 2026-02-02 17:50:48.978 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.434 239853 INFO nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Creating config drive at /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.437 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr7z3u79h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 316 op/s
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.556 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr7z3u79h" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.583 239853 DEBUG nova.storage.rbd_utils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.586 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.704 239853 DEBUG oslo_concurrency.processutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.705 239853 INFO nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Deleting local config drive /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64/disk.config because it was imported into RBD.#033[00m
Feb  2 12:50:49 np0005605476 kernel: tap82cd628a-7f: entered promiscuous mode
Feb  2 12:50:49 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:49Z|00115|binding|INFO|Claiming lport 82cd628a-7fae-47cb-ba3b-d2c670304572 for this chassis.
Feb  2 12:50:49 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:49Z|00116|binding|INFO|82cd628a-7fae-47cb-ba3b-d2c670304572: Claiming fa:16:3e:be:0a:fe 10.100.0.3
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.7366] manager: (tap82cd628a-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.735 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.739 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.742 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.750 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:0a:fe 10.100.0.3'], port_security=['fa:16:3e:be:0a:fe 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c29c7ea2-29c6-40eb-a75b-289e533ecc64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9c44462f87f421099e0b0d1376904c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c63f7b3b-d1b7-480e-bc0f-69ad7c8d6195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e81d0e0c-73b2-43ee-93af-f299a40e5ded, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=82cd628a-7fae-47cb-ba3b-d2c670304572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.751 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 82cd628a-7fae-47cb-ba3b-d2c670304572 in datapath 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 bound to our chassis#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.752 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6#033[00m
Feb  2 12:50:49 np0005605476 systemd-machined[208080]: New machine qemu-11-instance-0000000b.
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.758 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f55ee0e7-c858-4194-a0a5-48d9c04a1ec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.759 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27d3f0a2-71 in ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.760 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27d3f0a2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.760 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[365dad0b-946d-4137-aa5c-d35b5ee47fe0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.761 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5857ea15-0ec7-494d-bafb-781da484c874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.769 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[368ffa17-c8db-4db1-95c4-005fee328fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb  2 12:50:49 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:49Z|00117|binding|INFO|Setting lport 82cd628a-7fae-47cb-ba3b-d2c670304572 ovn-installed in OVS
Feb  2 12:50:49 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:49Z|00118|binding|INFO|Setting lport 82cd628a-7fae-47cb-ba3b-d2c670304572 up in Southbound
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.780 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 systemd-udevd[255935]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.790 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[db998185-5529-4c94-b98e-34c3104fda61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.795 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.7980] device (tap82cd628a-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.7990] device (tap82cd628a-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.807 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d5bb17-7c8f-4bfb-a761-a5f4e477f98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 systemd-udevd[255938]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.8116] manager: (tap27d3f0a2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.811 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[549af7fb-37f5-4ca0-af55-f194629abfe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.831 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[128afe4f-09de-407a-a3cc-0b1bc2816453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.833 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4e6215-d4fc-46cb-9319-7593d93ab39e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.8492] device (tap27d3f0a2-70): carrier: link connected
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.852 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb933f9-e73b-4189-85f5-d5c575a521c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.864 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7adf3b25-6694-4180-a78c-d3461f21cde5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27d3f0a2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:1e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389118, 'reachable_time': 20773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255966, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.875 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f7e3fa-dea2-4d45-89e8-21436758de20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:1e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389118, 'tstamp': 389118}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255967, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.887 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8dee80c6-c508-408f-9102-bd50151cad6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27d3f0a2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:1e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389118, 'reachable_time': 20773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255968, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.913 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[44ff446c-31f0-465e-9471-6df5242e4fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.955 239853 DEBUG nova.compute.manager [req-26b80f10-9990-4639-9006-e4ddb82b6f65 req-cf8545cc-9730-42a4-ac31-2c5550797873 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.955 239853 DEBUG oslo_concurrency.lockutils [req-26b80f10-9990-4639-9006-e4ddb82b6f65 req-cf8545cc-9730-42a4-ac31-2c5550797873 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.956 239853 DEBUG oslo_concurrency.lockutils [req-26b80f10-9990-4639-9006-e4ddb82b6f65 req-cf8545cc-9730-42a4-ac31-2c5550797873 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.956 239853 DEBUG oslo_concurrency.lockutils [req-26b80f10-9990-4639-9006-e4ddb82b6f65 req-cf8545cc-9730-42a4-ac31-2c5550797873 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.956 239853 DEBUG nova.compute.manager [req-26b80f10-9990-4639-9006-e4ddb82b6f65 req-cf8545cc-9730-42a4-ac31-2c5550797873 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Processing event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.971 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ae0abd-cb91-4318-85af-44ea72a579c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.974 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d3f0a2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.974 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.975 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d3f0a2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.977 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 NetworkManager[49022]: <info>  [1770054649.9788] manager: (tap27d3f0a2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb  2 12:50:49 np0005605476 kernel: tap27d3f0a2-70: entered promiscuous mode
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.981 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:49.982 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27d3f0a2-70, col_values=(('external_ids', {'iface-id': 'feaa395a-f5d1-49f8-90b4-f45ef83f72dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:50:49 np0005605476 nova_compute[239846]: 2026-02-02 17:50:49.983 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:49 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:49Z|00119|binding|INFO|Releasing lport feaa395a-f5d1-49f8-90b4-f45ef83f72dd from this chassis (sb_readonly=0)
Feb  2 12:50:50 np0005605476 nova_compute[239846]: 2026-02-02 17:50:50.026 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:50.027 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:50.028 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb1337-ee8d-45fb-b9ce-ad2892c50cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:50.028 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6.pid.haproxy
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:50:50 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:50:50.029 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'env', 'PROCESS_TAG=haproxy-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:50:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/787346833' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:50:50 np0005605476 podman[256000]: 2026-02-02 17:50:50.320838206 +0000 UTC m=+0.041899037 container create 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:50:50 np0005605476 systemd[1]: Started libpod-conmon-76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae.scope.
Feb  2 12:50:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:50:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe45d0d5151e721dabccbb40d23cc4499ae504a0706b19625213f34cacefbb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:50:50 np0005605476 podman[256000]: 2026-02-02 17:50:50.297334646 +0000 UTC m=+0.018395487 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:50:50 np0005605476 podman[256000]: 2026-02-02 17:50:50.398704931 +0000 UTC m=+0.119765752 container init 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 12:50:50 np0005605476 podman[256000]: 2026-02-02 17:50:50.405264615 +0000 UTC m=+0.126325456 container start 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb  2 12:50:50 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [NOTICE]   (256019) : New worker (256021) forked
Feb  2 12:50:50 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [NOTICE]   (256019) : Loading success.
Feb  2 12:50:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb  2 12:50:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb  2 12:50:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb  2 12:50:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 10 MiB/s wr, 250 op/s
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.508 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054651.5059314, c29c7ea2-29c6-40eb-a75b-289e533ecc64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.509 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] VM Started (Lifecycle Event)#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.512 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.517 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.522 239853 INFO nova.virt.libvirt.driver [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Instance spawned successfully.#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.523 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.528 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.536 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.550 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.551 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.552 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.552 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.553 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.553 239853 DEBUG nova.virt.libvirt.driver [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.561 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.562 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054651.5077784, c29c7ea2-29c6-40eb-a75b-289e533ecc64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.562 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.596 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.601 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054651.5150173, c29c7ea2-29c6-40eb-a75b-289e533ecc64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.601 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.638 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.643 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.650 239853 INFO nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.651 239853 DEBUG nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.666 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.719 239853 INFO nova.compute.manager [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Took 8.25 seconds to build instance.#033[00m
Feb  2 12:50:51 np0005605476 nova_compute[239846]: 2026-02-02 17:50:51.736 239853 DEBUG oslo_concurrency.lockutils [None req-7eb2db34-b2ec-4be9-a402-6c102a5b7dc5 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.063 239853 DEBUG nova.compute.manager [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.064 239853 DEBUG oslo_concurrency.lockutils [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.064 239853 DEBUG oslo_concurrency.lockutils [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.064 239853 DEBUG oslo_concurrency.lockutils [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.064 239853 DEBUG nova.compute.manager [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] No waiting events found dispatching network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:50:52 np0005605476 nova_compute[239846]: 2026-02-02 17:50:52.064 239853 WARNING nova.compute.manager [req-db11e327-1364-4eee-93af-a8d81cc7fe99 req-2542bea2-bcd0-4c28-b676-b7ef98075da2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received unexpected event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:50:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb  2 12:50:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb  2 12:50:53 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb  2 12:50:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 8.7 MiB/s wr, 217 op/s
Feb  2 12:50:53 np0005605476 nova_compute[239846]: 2026-02-02 17:50:53.913 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:54 np0005605476 nova_compute[239846]: 2026-02-02 17:50:54.796 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:50:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb  2 12:50:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb  2 12:50:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb  2 12:50:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 1.6 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 90 MiB/s wr, 475 op/s
Feb  2 12:50:55 np0005605476 NetworkManager[49022]: <info>  [1770054655.6832] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb  2 12:50:55 np0005605476 NetworkManager[49022]: <info>  [1770054655.6845] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.682 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.760 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:50:55Z|00120|binding|INFO|Releasing lport feaa395a-f5d1-49f8-90b4-f45ef83f72dd from this chassis (sb_readonly=0)
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.772 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.946 239853 DEBUG nova.compute.manager [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.946 239853 DEBUG nova.compute.manager [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing instance network info cache due to event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.946 239853 DEBUG oslo_concurrency.lockutils [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.946 239853 DEBUG oslo_concurrency.lockutils [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:50:55 np0005605476 nova_compute[239846]: 2026-02-02 17:50:55.947 239853 DEBUG nova.network.neutron [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:50:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb  2 12:50:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb  2 12:50:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb  2 12:50:57 np0005605476 nova_compute[239846]: 2026-02-02 17:50:57.144 239853 DEBUG nova.network.neutron [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updated VIF entry in instance network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:50:57 np0005605476 nova_compute[239846]: 2026-02-02 17:50:57.144 239853 DEBUG nova.network.neutron [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:50:57 np0005605476 nova_compute[239846]: 2026-02-02 17:50:57.160 239853 DEBUG oslo_concurrency.lockutils [req-68705e43-1e49-4056-985e-014d270ecd10 req-07094873-30b5-442c-a876-f7ac3ce20179 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:50:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 2.0 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 4.0 MiB/s rd, 140 MiB/s wr, 496 op/s
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303055592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303055592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb  2 12:50:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb  2 12:50:58 np0005605476 nova_compute[239846]: 2026-02-02 17:50:58.954 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:50:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 2.1 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 4.1 MiB/s rd, 156 MiB/s wr, 668 op/s
Feb  2 12:50:59 np0005605476 nova_compute[239846]: 2026-02-02 17:50:59.798 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb  2 12:51:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb  2 12:51:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb  2 12:51:00 np0005605476 nova_compute[239846]: 2026-02-02 17:51:00.253 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054645.2519472, 610064dc-da47-4fb7-b1ed-20f04ec73639 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:00 np0005605476 nova_compute[239846]: 2026-02-02 17:51:00.253 239853 INFO nova.compute.manager [-] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:51:00 np0005605476 nova_compute[239846]: 2026-02-02 17:51:00.277 239853 DEBUG nova.compute.manager [None req-99b79afc-9306-4bdc-bf24-802c88d8257d - - - - - -] [instance: 610064dc-da47-4fb7-b1ed-20f04ec73639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 1.9 MiB/s rd, 90 MiB/s wr, 345 op/s
Feb  2 12:51:02 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:02Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:0a:fe 10.100.0.3
Feb  2 12:51:02 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:02Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:0a:fe 10.100.0.3
Feb  2 12:51:03 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 12:51:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 131 KiB/s rd, 22 MiB/s wr, 194 op/s
Feb  2 12:51:03 np0005605476 nova_compute[239846]: 2026-02-02 17:51:03.958 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:04 np0005605476 nova_compute[239846]: 2026-02-02 17:51:04.799 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:51:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2257240636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:51:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:51:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2257240636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:51:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb  2 12:51:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb  2 12:51:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb  2 12:51:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 24 MiB/s wr, 286 op/s
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3169781853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:51:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3169781853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:51:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771646333' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:08 np0005605476 nova_compute[239846]: 2026-02-02 17:51:08.958 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb  2 12:51:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb  2 12:51:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb  2 12:51:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.9 MiB/s wr, 242 op/s
Feb  2 12:51:09 np0005605476 nova_compute[239846]: 2026-02-02 17:51:09.832 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb  2 12:51:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb  2 12:51:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb  2 12:51:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 204 op/s
Feb  2 12:51:11 np0005605476 nova_compute[239846]: 2026-02-02 17:51:11.892 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:11 np0005605476 nova_compute[239846]: 2026-02-02 17:51:11.893 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:11 np0005605476 nova_compute[239846]: 2026-02-02 17:51:11.910 239853 DEBUG nova.objects.instance [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:11 np0005605476 nova_compute[239846]: 2026-02-02 17:51:11.948 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.131 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.132 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.132 239853 INFO nova.compute.manager [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Attaching volume e739afa2-31aa-4cd4-b353-1300d8294fd0 to /dev/vdb#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.358 239853 DEBUG os_brick.utils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.360 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.368 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.368 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d55762-598a-4f90-8635-bf2fca7b2d0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.369 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.375 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.376 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[262dc502-ced9-4c45-8c5f-3ec34b67e4ec]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.377 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.383 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.383 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[43a30b20-b9bb-4cfb-8b8c-fe0aa429df33]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.385 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[84d5187f-2f2c-4d03-8dbf-28c17d2512cd]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.385 239853 DEBUG oslo_concurrency.processutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.402 239853 DEBUG oslo_concurrency.processutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.404 239853 DEBUG os_brick.initiator.connectors.lightos [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.404 239853 DEBUG os_brick.initiator.connectors.lightos [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.405 239853 DEBUG os_brick.initiator.connectors.lightos [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.405 239853 DEBUG os_brick.utils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] <== get_connector_properties: return (45ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:51:12 np0005605476 nova_compute[239846]: 2026-02-02 17:51:12.405 239853 DEBUG nova.virt.block_device [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating existing volume attachment record: 7c9e8165-d9e6-4022-b9c2-feb7c4c2128d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:51:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3471400492' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.206 239853 DEBUG nova.objects.instance [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.229 239853 DEBUG nova.virt.libvirt.driver [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Attempting to attach volume e739afa2-31aa-4cd4-b353-1300d8294fd0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.232 239853 DEBUG nova.virt.libvirt.guest [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-e739afa2-31aa-4cd4-b353-1300d8294fd0">
Feb  2 12:51:13 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:51:13 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:51:13 np0005605476 nova_compute[239846]:  <serial>e739afa2-31aa-4cd4-b353-1300d8294fd0</serial>
Feb  2 12:51:13 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:51:13 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.326 239853 DEBUG nova.virt.libvirt.driver [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.326 239853 DEBUG nova.virt.libvirt.driver [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.327 239853 DEBUG nova.virt.libvirt.driver [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.327 239853 DEBUG nova.virt.libvirt.driver [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No VIF found with MAC fa:16:3e:be:0a:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:51:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 151 op/s
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.589 239853 DEBUG oslo_concurrency.lockutils [None req-cd9cda0c-922a-4497-930d-6afb8a2ae5bf 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:13 np0005605476 nova_compute[239846]: 2026-02-02 17:51:13.962 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:14 np0005605476 podman[256100]: 2026-02-02 17:51:14.653133013 +0000 UTC m=+0.091439787 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 12:51:14 np0005605476 nova_compute[239846]: 2026-02-02 17:51:14.835 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb  2 12:51:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb  2 12:51:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb  2 12:51:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.2 MiB/s wr, 84 op/s
Feb  2 12:51:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:51:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 18K writes, 70K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 6490 syncs, 2.88 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 28.58 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5462 syncs, 2.34 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.199 239853 DEBUG oslo_concurrency.lockutils [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.200 239853 DEBUG oslo_concurrency.lockutils [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.215 239853 INFO nova.compute.manager [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Detaching volume e739afa2-31aa-4cd4-b353-1300d8294fd0#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.326 239853 INFO nova.virt.block_device [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Attempting to driver detach volume e739afa2-31aa-4cd4-b353-1300d8294fd0 from mountpoint /dev/vdb#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.334 239853 DEBUG nova.virt.libvirt.driver [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Attempting to detach device vdb from instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.335 239853 DEBUG nova.virt.libvirt.guest [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-e739afa2-31aa-4cd4-b353-1300d8294fd0">
Feb  2 12:51:16 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <serial>e739afa2-31aa-4cd4-b353-1300d8294fd0</serial>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:51:16 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.340 239853 INFO nova.virt.libvirt.driver [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully detached device vdb from instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 from the persistent domain config.#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.341 239853 DEBUG nova.virt.libvirt.driver [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.341 239853 DEBUG nova.virt.libvirt.guest [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-e739afa2-31aa-4cd4-b353-1300d8294fd0">
Feb  2 12:51:16 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <serial>e739afa2-31aa-4cd4-b353-1300d8294fd0</serial>
Feb  2 12:51:16 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:51:16 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:51:16 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.443 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054676.443579, c29c7ea2-29c6-40eb-a75b-289e533ecc64 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.445 239853 DEBUG nova.virt.libvirt.driver [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.447 239853 INFO nova.virt.libvirt.driver [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully detached device vdb from instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 from the live domain config.#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.615 239853 DEBUG nova.objects.instance [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:16 np0005605476 nova_compute[239846]: 2026-02-02 17:51:16.651 239853 DEBUG oslo_concurrency.lockutils [None req-d4b7c00a-1ad8-4fe7-9fb0-bc3f11ea5748 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:17 np0005605476 nova_compute[239846]: 2026-02-02 17:51:17.198 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 71 op/s
Feb  2 12:51:17 np0005605476 podman[256122]: 2026-02-02 17:51:17.63617936 +0000 UTC m=+0.091161429 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:51:17 np0005605476 nova_compute[239846]: 2026-02-02 17:51:17.928 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:17 np0005605476 nova_compute[239846]: 2026-02-02 17:51:17.928 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:17 np0005605476 nova_compute[239846]: 2026-02-02 17:51:17.943 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.025 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.026 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.034 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.035 239853 INFO nova.compute.claims [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:51:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb  2 12:51:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb  2 12:51:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.142 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678511806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.687 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.694 239853 DEBUG nova.compute.provider_tree [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.712 239853 DEBUG nova.scheduler.client.report [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.750 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.751 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.803 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.803 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.847 239853 INFO nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.890 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.937 239853 INFO nova.virt.block_device [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Booting with volume 44a2b07e-b5a0-4c73-b8f5-1af52e236be8 at /dev/vdb#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.965 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:18 np0005605476 nova_compute[239846]: 2026-02-02 17:51:18.984 239853 DEBUG nova.policy [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91a3ca2bdb8d4c1fbfab4f38d262f4e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07fcb0b617c84dccb0074a9f1c41229e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.071 239853 DEBUG os_brick.utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.072 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.081 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.081 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ff331881-dd8a-493b-a3eb-d56093e0ea7d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.083 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.089 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.089 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3208ba-d66c-4fd7-a7a4-8c0ab94e742c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.091 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.096 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.096 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[adec3d5b-2585-4efb-9437-e105099414bb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.097 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[11f0cdd7-603b-43f4-8d2d-8001fc37780e]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.098 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.116 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.117 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.118 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.118 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.118 239853 DEBUG os_brick.utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.118 239853 DEBUG nova.virt.block_device [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updating existing volume attachment record: 65cbf28f-46c6-4df3-bc29-bbb7f697e37b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.325 239853 DEBUG nova.compute.manager [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.369 239853 INFO nova.compute.manager [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] instance snapshotting#033[00m
Feb  2 12:51:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 901 KiB/s rd, 900 KiB/s wr, 25 op/s
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.555 239853 INFO nova.virt.libvirt.driver [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Beginning live snapshot process#033[00m
Feb  2 12:51:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:51:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 17K writes, 70K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 17K writes, 5951 syncs, 2.96 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 23.30 MB, 0.04 MB/s
Interval WAL: 10K writes, 4483 syncs, 2.32 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.627 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Successfully created port: 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.673 239853 DEBUG nova.virt.libvirt.imagebackend [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No parent info for 88ad7b87-724c-4a9f-a946-6c9736783609; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.872 239853 DEBUG nova.storage.rbd_utils [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] creating snapshot(0783c4d6fa664c22a35ade77e2cd29bb) on rbd image(c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb  2 12:51:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629113190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:19 np0005605476 nova_compute[239846]: 2026-02-02 17:51:19.903 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.121 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb  2 12:51:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb  2 12:51:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.271 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.273 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.274 239853 INFO nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Creating image(s)#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.295 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.315 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.337 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.340 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.384 239853 DEBUG nova.storage.rbd_utils [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] cloning vms/c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk@0783c4d6fa664c22a35ade77e2cd29bb to images/9440fdc0-af14-4205-993a-98d6bf0736d2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.413 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.414 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.414 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.414 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.434 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.437 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 2d909269-9b7a-4d8c-b385-067b624e50bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.502 239853 DEBUG nova.storage.rbd_utils [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] flattening images/9440fdc0-af14-4205-993a-98d6bf0736d2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.711 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Successfully updated port: 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.751 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.752 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquired lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.752 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.867 239853 DEBUG nova.compute.manager [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-changed-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.867 239853 DEBUG nova.compute.manager [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Refreshing instance network info cache due to event network-changed-4aaf8ce7-0bce-41b5-bc64-ea40a533f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.867 239853 DEBUG oslo_concurrency.lockutils [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.911 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 2d909269-9b7a-4d8c-b385-067b624e50bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.934 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:51:20 np0005605476 nova_compute[239846]: 2026-02-02 17:51:20.985 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] resizing rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.007 239853 DEBUG nova.storage.rbd_utils [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] removing snapshot(0783c4d6fa664c22a35ade77e2cd29bb) on rbd image(c29c7ea2-29c6-40eb-a75b-289e533ecc64_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.064 239853 DEBUG nova.objects.instance [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lazy-loading 'migration_context' on Instance uuid 2d909269-9b7a-4d8c-b385-067b624e50bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.077 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.077 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Ensure instance console log exists: /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.077 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.078 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.078 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb  2 12:51:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb  2 12:51:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb  2 12:51:21 np0005605476 nova_compute[239846]: 2026-02-02 17:51:21.217 239853 DEBUG nova.storage.rbd_utils [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] creating snapshot(snap) on rbd image(9440fdc0-af14-4205-993a-98d6bf0736d2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb  2 12:51:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 23 KiB/s rd, 384 KiB/s wr, 31 op/s
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.041 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:22.042 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:51:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:22.043 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:51:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb  2 12:51:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb  2 12:51:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.394 239853 DEBUG nova.network.neutron [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updating instance_info_cache with network_info: [{"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.421 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Releasing lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.421 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Instance network_info: |[{"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.421 239853 DEBUG oslo_concurrency.lockutils [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.421 239853 DEBUG nova.network.neutron [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Refreshing network info cache for port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.424 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Start _get_guest_xml network_info=[{"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': -1, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '65cbf28f-46c6-4df3-bc29-bbb7f697e37b', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-44a2b07e-b5a0-4c73-b8f5-1af52e236be8', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '44a2b07e-b5a0-4c73-b8f5-1af52e236be8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2d909269-9b7a-4d8c-b385-067b624e50bc', 'attached_at': '', 'detached_at': '', 'volume_id': '44a2b07e-b5a0-4c73-b8f5-1af52e236be8', 'serial': '44a2b07e-b5a0-4c73-b8f5-1af52e236be8'}, 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.429 239853 WARNING nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.434 239853 DEBUG nova.virt.libvirt.host [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.435 239853 DEBUG nova.virt.libvirt.host [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.438 239853 DEBUG nova.virt.libvirt.host [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.438 239853 DEBUG nova.virt.libvirt.host [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.439 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.439 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.439 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.439 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.440 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.441 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.441 239853 DEBUG nova.virt.hardware [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.443 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176402185' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.931 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.953 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:22 np0005605476 nova_compute[239846]: 2026-02-02 17:51:22.958 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 12:51:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 60K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4882 syncs, 3.07 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9331 writes, 36K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 23.64 MB, 0.04 MB/s#012Interval WAL: 9331 writes, 3965 syncs, 2.35 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.456 239853 INFO nova.virt.libvirt.driver [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Snapshot image upload complete#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.457 239853 INFO nova.compute.manager [None req-e45c18f9-48ee-4ba7-ad8a-4c455f240df2 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Took 4.09 seconds to snapshot the instance on the hypervisor.#033[00m
Feb  2 12:51:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 19 KiB/s rd, 411 KiB/s wr, 28 op/s
Feb  2 12:51:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797178572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.542 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.562 239853 DEBUG nova.virt.libvirt.vif [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-391420875',display_name='tempest-instance-391420875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-391420875',id=12,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHTQbKHoOKXgtPCda2P+xfduDnfHo0kDKiWIKzuAWBql1fwUTGWxPvrKc6SHeOWoa2o4Vo/30fD792pb1rUBQr6ZcrY2rdJ0d62PxAhx3ZuIvZX6lb9S0CpqVEoa7Ce+A==',key_name='tempest-keypair-1368486445',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07fcb0b617c84dccb0074a9f1c41229e',ramdisk_id='',reservation_id='r-sme9lh0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-248885617',owner_user_name='tempest-VolumesBackupsTest-248885617-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91a3ca2bdb8d4c1fbfab4f38d262f4e0',uuid=2d909269-9b7a-4d8c-b385-067b624e50bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.563 239853 DEBUG nova.network.os_vif_util [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converting VIF {"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.563 239853 DEBUG nova.network.os_vif_util [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.565 239853 DEBUG nova.objects.instance [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d909269-9b7a-4d8c-b385-067b624e50bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.576 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <uuid>2d909269-9b7a-4d8c-b385-067b624e50bc</uuid>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <name>instance-0000000c</name>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:name>tempest-instance-391420875</nova:name>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:51:22</nova:creationTime>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:user uuid="91a3ca2bdb8d4c1fbfab4f38d262f4e0">tempest-VolumesBackupsTest-248885617-project-member</nova:user>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:project uuid="07fcb0b617c84dccb0074a9f1c41229e">tempest-VolumesBackupsTest-248885617</nova:project>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <nova:port uuid="4aaf8ce7-0bce-41b5-bc64-ea40a533f786">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="serial">2d909269-9b7a-4d8c-b385-067b624e50bc</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="uuid">2d909269-9b7a-4d8c-b385-067b624e50bc</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/2d909269-9b7a-4d8c-b385-067b624e50bc_disk">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-44a2b07e-b5a0-4c73-b8f5-1af52e236be8">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <target dev="vdb" bus="virtio"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <serial>44a2b07e-b5a0-4c73-b8f5-1af52e236be8</serial>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:11:fc:76"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <target dev="tap4aaf8ce7-0b"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/console.log" append="off"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:51:23 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:51:23 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:51:23 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:51:23 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.577 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Preparing to wait for external event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.578 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.578 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.578 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.579 239853 DEBUG nova.virt.libvirt.vif [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-391420875',display_name='tempest-instance-391420875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-391420875',id=12,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHTQbKHoOKXgtPCda2P+xfduDnfHo0kDKiWIKzuAWBql1fwUTGWxPvrKc6SHeOWoa2o4Vo/30fD792pb1rUBQr6ZcrY2rdJ0d62PxAhx3ZuIvZX6lb9S0CpqVEoa7Ce+A==',key_name='tempest-keypair-1368486445',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07fcb0b617c84dccb0074a9f1c41229e',ramdisk_id='',reservation_id='r-sme9lh0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vir
tio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-248885617',owner_user_name='tempest-VolumesBackupsTest-248885617-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91a3ca2bdb8d4c1fbfab4f38d262f4e0',uuid=2d909269-9b7a-4d8c-b385-067b624e50bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.579 239853 DEBUG nova.network.os_vif_util [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converting VIF {"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.579 239853 DEBUG nova.network.os_vif_util [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.580 239853 DEBUG os_vif [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.580 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.581 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.581 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.583 239853 DEBUG nova.network.neutron [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updated VIF entry in instance network info cache for port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.583 239853 DEBUG nova.network.neutron [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updating instance_info_cache with network_info: [{"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.585 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.586 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aaf8ce7-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.586 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4aaf8ce7-0b, col_values=(('external_ids', {'iface-id': '4aaf8ce7-0bce-41b5-bc64-ea40a533f786', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:fc:76', 'vm-uuid': '2d909269-9b7a-4d8c-b385-067b624e50bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.587 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:23 np0005605476 NetworkManager[49022]: <info>  [1770054683.5886] manager: (tap4aaf8ce7-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.589 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.595 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.596 239853 INFO os_vif [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b')#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.598 239853 DEBUG oslo_concurrency.lockutils [req-0b1a7f92-ea4e-4352-a846-936ab9d5e125 req-ec0f25df-6cf1-4467-bbc2-9b7645b6fe32 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.650 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.651 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.651 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.651 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] No VIF found with MAC fa:16:3e:11:fc:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.652 239853 INFO nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Using config drive#033[00m
Feb  2 12:51:23 np0005605476 nova_compute[239846]: 2026-02-02 17:51:23.674 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.017 239853 INFO nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Creating config drive at /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.023 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr3q08hwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.142 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr3q08hwr" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.174 239853 DEBUG nova.storage.rbd_utils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] rbd image 2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.177 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config 2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.288 239853 DEBUG oslo_concurrency.processutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config 2d909269-9b7a-4d8c-b385-067b624e50bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.290 239853 INFO nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Deleting local config drive /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc/disk.config because it was imported into RBD.#033[00m
Feb  2 12:51:24 np0005605476 kernel: tap4aaf8ce7-0b: entered promiscuous mode
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.3273] manager: (tap4aaf8ce7-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Feb  2 12:51:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:24Z|00121|binding|INFO|Claiming lport 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 for this chassis.
Feb  2 12:51:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:24Z|00122|binding|INFO|4aaf8ce7-0bce-41b5-bc64-ea40a533f786: Claiming fa:16:3e:11:fc:76 10.100.0.13
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.328 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.336 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:fc:76 10.100.0.13'], port_security=['fa:16:3e:11:fc:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2d909269-9b7a-4d8c-b385-067b624e50bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8267f865-a42d-418a-8f76-cf395fe72304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07fcb0b617c84dccb0074a9f1c41229e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47bc4c06-2c5a-4139-a520-1f888fa04212', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29d8f34c-033a-4910-b36d-40dce5cc751d, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=4aaf8ce7-0bce-41b5-bc64-ea40a533f786) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.337 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.337 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 in datapath 8267f865-a42d-418a-8f76-cf395fe72304 bound to our chassis#033[00m
Feb  2 12:51:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:24Z|00123|binding|INFO|Setting lport 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 ovn-installed in OVS
Feb  2 12:51:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:24Z|00124|binding|INFO|Setting lport 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 up in Southbound
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.338 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8267f865-a42d-418a-8f76-cf395fe72304#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.339 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.346 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e241fed3-0e31-487d-a9e0-d06901424ca8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.347 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8267f865-a1 in ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.349 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8267f865-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.349 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1369be95-eef4-463a-bee6-43b349d094d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.350 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b755036-3681-40aa-9f34-b645edcc241a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 systemd-udevd[256622]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:51:24 np0005605476 systemd-machined[208080]: New machine qemu-12-instance-0000000c.
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.358 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[671b4a5e-d8aa-482c-8874-8baff2760a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.3658] device (tap4aaf8ce7-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:51:24 np0005605476 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.3668] device (tap4aaf8ce7-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.368 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e9100e22-f7ab-4df1-aabb-5d0cabd1ebd3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.389 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6afd2b83-0d0e-4934-ac93-e2771a05aad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.393 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[349820ba-8f6e-4e0f-ac2c-07fce7116a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.3954] manager: (tap8267f865-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.416 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2723a6-dcd1-4385-9850-0e6632dcb28b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.420 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[db4bc802-6f39-4ab9-8f79-3245cbe24fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.4386] device (tap8267f865-a0): carrier: link connected
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.442 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[67e38889-2d29-4783-b858-8d7e4b1d241c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.459 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2438eb-47e2-43f2-844b-1b5e2f54a3cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8267f865-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:59:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392577, 'reachable_time': 15249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256654, 'error': None, 'target': 'ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.469 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69ae6bfb-aea8-4c0c-8468-4b29ea14e682]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:59ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 392577, 'tstamp': 392577}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256655, 'error': None, 'target': 'ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.483 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[72dfa628-0dea-4a00-bd99-3487cf77e4f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8267f865-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:59:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392577, 'reachable_time': 15249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256656, 'error': None, 'target': 'ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.504 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6c6322-10dd-4b47-b590-69ed8c444847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.540 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a441ef-2b33-4a82-ad80-bc07f24302d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.544 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8267f865-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.544 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.545 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8267f865-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:24 np0005605476 NetworkManager[49022]: <info>  [1770054684.5472] manager: (tap8267f865-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Feb  2 12:51:24 np0005605476 kernel: tap8267f865-a0: entered promiscuous mode
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.546 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.551 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8267f865-a0, col_values=(('external_ids', {'iface-id': '33e52a7a-29c2-407d-8059-71d220280ed9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.552 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:24Z|00125|binding|INFO|Releasing lport 33e52a7a-29c2-407d-8059-71d220280ed9 from this chassis (sb_readonly=0)
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.554 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8267f865-a42d-418a-8f76-cf395fe72304.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8267f865-a42d-418a-8f76-cf395fe72304.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.554 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[29a16fca-278e-40c5-b7bc-00e096863566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.555 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-8267f865-a42d-418a-8f76-cf395fe72304
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/8267f865-a42d-418a-8f76-cf395fe72304.pid.haproxy
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 8267f865-a42d-418a-8f76-cf395fe72304
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:51:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:24.557 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304', 'env', 'PROCESS_TAG=haproxy-8267f865-a42d-418a-8f76-cf395fe72304', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8267f865-a42d-418a-8f76-cf395fe72304.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.561 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.768 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054684.768233, 2d909269-9b7a-4d8c-b385-067b624e50bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.769 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] VM Started (Lifecycle Event)#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.790 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.793 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054684.7684715, 2d909269-9b7a-4d8c-b385-067b624e50bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.793 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.813 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.816 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.837 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:51:24 np0005605476 nova_compute[239846]: 2026-02-02 17:51:24.877 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:24 np0005605476 podman[256749]: 2026-02-02 17:51:24.881264917 +0000 UTC m=+0.043673205 container create 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Feb  2 12:51:24 np0005605476 systemd[1]: Started libpod-conmon-655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9.scope.
Feb  2 12:51:24 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:24 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5afb5dab47848c532eb605f7d2910eb386f1ebf020874af4f94aabe195d474bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:24 np0005605476 podman[256749]: 2026-02-02 17:51:24.951484928 +0000 UTC m=+0.113893236 container init 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb  2 12:51:24 np0005605476 podman[256749]: 2026-02-02 17:51:24.857905863 +0000 UTC m=+0.020314171 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:51:24 np0005605476 podman[256749]: 2026-02-02 17:51:24.955537052 +0000 UTC m=+0.117945330 container start 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:51:24 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [NOTICE]   (256769) : New worker (256771) forked
Feb  2 12:51:24 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [NOTICE]   (256769) : Loading success.
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.038 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:25 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 12:51:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 259 op/s
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.723 239853 DEBUG nova.compute.manager [req-f08ae8ab-39b5-44be-8e2c-b1e36d42080f req-004339af-421a-4712-83c9-4062db3d535d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.724 239853 DEBUG oslo_concurrency.lockutils [req-f08ae8ab-39b5-44be-8e2c-b1e36d42080f req-004339af-421a-4712-83c9-4062db3d535d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.724 239853 DEBUG oslo_concurrency.lockutils [req-f08ae8ab-39b5-44be-8e2c-b1e36d42080f req-004339af-421a-4712-83c9-4062db3d535d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.724 239853 DEBUG oslo_concurrency.lockutils [req-f08ae8ab-39b5-44be-8e2c-b1e36d42080f req-004339af-421a-4712-83c9-4062db3d535d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.724 239853 DEBUG nova.compute.manager [req-f08ae8ab-39b5-44be-8e2c-b1e36d42080f req-004339af-421a-4712-83c9-4062db3d535d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Processing event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.725 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.729 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054685.728935, 2d909269-9b7a-4d8c-b385-067b624e50bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.729 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] VM Resumed (Lifecycle Event)
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.731 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.735 239853 INFO nova.virt.libvirt.driver [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Instance spawned successfully.
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.735 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.753 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.757 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.771 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.771 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.772 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.773 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.773 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.774 239853 DEBUG nova.virt.libvirt.driver [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.779 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.836 239853 INFO nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Took 5.56 seconds to spawn the instance on the hypervisor.
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.836 239853 DEBUG nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.906 239853 INFO nova.compute.manager [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Took 7.91 seconds to build instance.
Feb  2 12:51:25 np0005605476 nova_compute[239846]: 2026-02-02 17:51:25.929 239853 DEBUG oslo_concurrency.lockutils [None req-e49e1557-2e4a-4cbe-bba9-e01b32a533f9 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:51:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.406991186 +0000 UTC m=+0.043434450 container create 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:51:26 np0005605476 systemd[1]: Started libpod-conmon-49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3.scope.
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.384038132 +0000 UTC m=+0.020481426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.495754837 +0000 UTC m=+0.132198111 container init 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.504094961 +0000 UTC m=+0.140538215 container start 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.507726003 +0000 UTC m=+0.144169267 container attach 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:51:26 np0005605476 brave_heisenberg[257009]: 167 167
Feb  2 12:51:26 np0005605476 systemd[1]: libpod-49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3.scope: Deactivated successfully.
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.508982418 +0000 UTC m=+0.145425672 container died 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:51:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e534e0bb40ca905c2ef16f2f1cc465954e9a85b92101390377d38d6819fbd339-merged.mount: Deactivated successfully.
Feb  2 12:51:26 np0005605476 podman[256992]: 2026-02-02 17:51:26.545650507 +0000 UTC m=+0.182093771 container remove 49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:51:26 np0005605476 systemd[1]: libpod-conmon-49f79d51c1a6f9525a566fc4df8b25e10d71ea6eab82b7608d9b5536cce5e8d3.scope: Deactivated successfully.
Feb  2 12:51:26 np0005605476 podman[257031]: 2026-02-02 17:51:26.678834205 +0000 UTC m=+0.037674988 container create 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:51:26 np0005605476 systemd[1]: Started libpod-conmon-33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945.scope.
Feb  2 12:51:26 np0005605476 podman[257031]: 2026-02-02 17:51:26.660356086 +0000 UTC m=+0.019196859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:26 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:26 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:26 np0005605476 podman[257031]: 2026-02-02 17:51:26.77807614 +0000 UTC m=+0.136916913 container init 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb  2 12:51:26 np0005605476 podman[257031]: 2026-02-02 17:51:26.784671615 +0000 UTC m=+0.143512368 container start 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:51:26 np0005605476 podman[257031]: 2026-02-02 17:51:26.789085539 +0000 UTC m=+0.147926312 container attach 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:51:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:51:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:27 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:51:27 np0005605476 crazy_hopper[257048]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:51:27 np0005605476 crazy_hopper[257048]: --> All data devices are unavailable
Feb  2 12:51:27 np0005605476 systemd[1]: libpod-33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945.scope: Deactivated successfully.
Feb  2 12:51:27 np0005605476 podman[257068]: 2026-02-02 17:51:27.280291065 +0000 UTC m=+0.029574371 container died 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 12:51:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6041dfdc64b46bde0d00654ac77ae3456f5545b32b99de7eb6bb7914259c5267-merged.mount: Deactivated successfully.
Feb  2 12:51:27 np0005605476 podman[257068]: 2026-02-02 17:51:27.319465194 +0000 UTC m=+0.068748480 container remove 33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:51:27 np0005605476 systemd[1]: libpod-conmon-33b6cb34feab94d4bb3e65c13aeaa9051c85a9a881a131016fdb728d12039945.scope: Deactivated successfully.
Feb  2 12:51:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 6.5 MiB/s rd, 9.3 MiB/s wr, 209 op/s
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.647 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.649 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.664 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.731769565 +0000 UTC m=+0.043223754 container create 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.762 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.763 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.769 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.769 239853 INFO nova.compute.claims [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Claim successful on node compute-0.ctlplane.example.com
Feb  2 12:51:27 np0005605476 systemd[1]: Started libpod-conmon-24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4.scope.
Feb  2 12:51:27 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.708312537 +0000 UTC m=+0.019766746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.807903012 +0000 UTC m=+0.119357201 container init 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.812097419 +0000 UTC m=+0.123551608 container start 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:51:27 np0005605476 agitated_mclean[257160]: 167 167
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.819214289 +0000 UTC m=+0.130668498 container attach 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 12:51:27 np0005605476 systemd[1]: libpod-24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4.scope: Deactivated successfully.
Feb  2 12:51:27 np0005605476 conmon[257160]: conmon 24c52c61989c797ce2e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4.scope/container/memory.events
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.822530222 +0000 UTC m=+0.133984441 container died 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 12:51:27 np0005605476 systemd[1]: var-lib-containers-storage-overlay-263f9f48e52420e3cc4d495dcc5804d6099b69874c253d1252602068d72542b5-merged.mount: Deactivated successfully.
Feb  2 12:51:27 np0005605476 podman[257144]: 2026-02-02 17:51:27.860908959 +0000 UTC m=+0.172363158 container remove 24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:51:27 np0005605476 systemd[1]: libpod-conmon-24c52c61989c797ce2e30d011a3b29502198fb1e52112e705b7b866e006e16c4.scope: Deactivated successfully.
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.912 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.960 239853 DEBUG nova.compute.manager [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.962 239853 DEBUG oslo_concurrency.lockutils [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.962 239853 DEBUG oslo_concurrency.lockutils [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.963 239853 DEBUG oslo_concurrency.lockutils [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.963 239853 DEBUG nova.compute.manager [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] No waiting events found dispatching network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:51:27 np0005605476 nova_compute[239846]: 2026-02-02 17:51:27.964 239853 WARNING nova.compute.manager [req-44ddbd95-ec71-404a-afa2-a1927e51850b req-02da7d7c-3730-4dba-a72e-f236507c69a3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received unexpected event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.019692435 +0000 UTC m=+0.041845255 container create 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 12:51:28 np0005605476 systemd[1]: Started libpod-conmon-96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6.scope.
Feb  2 12:51:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc77bf2196f607183ea2d3f67f48504c06db8784f98bebcf95b4da11b0303c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc77bf2196f607183ea2d3f67f48504c06db8784f98bebcf95b4da11b0303c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc77bf2196f607183ea2d3f67f48504c06db8784f98bebcf95b4da11b0303c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc77bf2196f607183ea2d3f67f48504c06db8784f98bebcf95b4da11b0303c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.004991693 +0000 UTC m=+0.027144533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.115423042 +0000 UTC m=+0.137575892 container init 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.123230521 +0000 UTC m=+0.145383341 container start 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.126196944 +0000 UTC m=+0.148349814 container attach 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:51:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005592288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.590 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.644 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.648 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.650 239853 DEBUG nova.compute.provider_tree [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.665 239853 DEBUG nova.scheduler.client.report [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.688 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.689 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:51:28 np0005605476 frosty_cray[257220]: {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    "0": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "devices": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "/dev/loop3"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            ],
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_name": "ceph_lv0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_size": "21470642176",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "name": "ceph_lv0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "tags": {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_name": "ceph",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.crush_device_class": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.encrypted": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.objectstore": "bluestore",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_id": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.vdo": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.with_tpm": "0"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            },
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "vg_name": "ceph_vg0"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        }
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    ],
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    "1": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "devices": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "/dev/loop4"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            ],
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_name": "ceph_lv1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_size": "21470642176",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "name": "ceph_lv1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "tags": {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_name": "ceph",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.crush_device_class": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.encrypted": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.objectstore": "bluestore",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_id": "1",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.vdo": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.with_tpm": "0"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            },
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "vg_name": "ceph_vg1"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        }
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    ],
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    "2": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "devices": [
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "/dev/loop5"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            ],
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_name": "ceph_lv2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_size": "21470642176",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "name": "ceph_lv2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "tags": {
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.cluster_name": "ceph",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.crush_device_class": "",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.encrypted": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.objectstore": "bluestore",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osd_id": "2",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.vdo": "0",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:                "ceph.with_tpm": "0"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            },
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "type": "block",
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:            "vg_name": "ceph_vg2"
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:        }
Feb  2 12:51:28 np0005605476 frosty_cray[257220]:    ]
Feb  2 12:51:28 np0005605476 frosty_cray[257220]: }
Feb  2 12:51:28 np0005605476 systemd[1]: libpod-96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6.scope: Deactivated successfully.
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.728586399 +0000 UTC m=+0.750739229 container died 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:51:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-bfcc77bf2196f607183ea2d3f67f48504c06db8784f98bebcf95b4da11b0303c-merged.mount: Deactivated successfully.
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.765 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.766 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:51:28 np0005605476 podman[257185]: 2026-02-02 17:51:28.773540411 +0000 UTC m=+0.795693231 container remove 96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:51:28 np0005605476 systemd[1]: libpod-conmon-96944922daeb7bb022ba131d0eb6b0719a03520b718487363798b32364b504f6.scope: Deactivated successfully.
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.789 239853 INFO nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.814 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.909 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.912 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.913 239853 INFO nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Creating image(s)#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.935 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.959 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.980 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.985 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "7e9c9033be179494f1918d6f463e82eb1e79eee4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.986 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "7e9c9033be179494f1918d6f463e82eb1e79eee4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:28 np0005605476 nova_compute[239846]: 2026-02-02 17:51:28.991 239853 DEBUG nova.policy [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '35a3cbbc2e32427f9356703501969892', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e9c44462f87f421099e0b0d1376904c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.018 239853 DEBUG nova.compute.manager [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-changed-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.019 239853 DEBUG nova.compute.manager [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Refreshing instance network info cache due to event network-changed-4aaf8ce7-0bce-41b5-bc64-ea40a533f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.019 239853 DEBUG oslo_concurrency.lockutils [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.019 239853 DEBUG oslo_concurrency.lockutils [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.020 239853 DEBUG nova.network.neutron [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Refreshing network info cache for port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.212650104 +0000 UTC m=+0.048202574 container create cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:51:29 np0005605476 systemd[1]: Started libpod-conmon-cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce.scope.
Feb  2 12:51:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.278 239853 DEBUG nova.virt.libvirt.imagebackend [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Image locations are: [{'url': 'rbd://eb48d0ef-3496-563c-b73d-661fb962013e/images/9440fdc0-af14-4205-993a-98d6bf0736d2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://eb48d0ef-3496-563c-b73d-661fb962013e/images/9440fdc0-af14-4205-993a-98d6bf0736d2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.286952729 +0000 UTC m=+0.122505209 container init cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.192337534 +0000 UTC m=+0.027890034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.292656689 +0000 UTC m=+0.128209139 container start cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.296132507 +0000 UTC m=+0.131684967 container attach cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:51:29 np0005605476 boring_euclid[257374]: 167 167
Feb  2 12:51:29 np0005605476 systemd[1]: libpod-cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce.scope: Deactivated successfully.
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.297349381 +0000 UTC m=+0.132901831 container died cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:51:29 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fdd46964ef19adc08a5d40d687a56aef7b40ba3df3fcace2a1cb7558cbaa3c2c-merged.mount: Deactivated successfully.
Feb  2 12:51:29 np0005605476 podman[257358]: 2026-02-02 17:51:29.330290586 +0000 UTC m=+0.165843036 container remove cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:51:29 np0005605476 systemd[1]: libpod-conmon-cd6df38635834330becbd33f829afa277ce5ce554dcc159fb70d6998a78a55ce.scope: Deactivated successfully.
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.341 239853 DEBUG nova.virt.libvirt.imagebackend [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Selected location: {'url': 'rbd://eb48d0ef-3496-563c-b73d-661fb962013e/images/9440fdc0-af14-4205-993a-98d6bf0736d2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.342 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] cloning images/9440fdc0-af14-4205-993a-98d6bf0736d2@snap to None/e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.440 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "7e9c9033be179494f1918d6f463e82eb1e79eee4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:29 np0005605476 podman[257466]: 2026-02-02 17:51:29.474325328 +0000 UTC m=+0.036765443 container create a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:51:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.1 MiB/s rd, 8.2 MiB/s wr, 244 op/s
Feb  2 12:51:29 np0005605476 systemd[1]: Started libpod-conmon-a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a.scope.
Feb  2 12:51:29 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed872948e6f6cae235befd85564e187f0fed8c330303d1cd2f6aea437345b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed872948e6f6cae235befd85564e187f0fed8c330303d1cd2f6aea437345b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed872948e6f6cae235befd85564e187f0fed8c330303d1cd2f6aea437345b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:29 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed872948e6f6cae235befd85564e187f0fed8c330303d1cd2f6aea437345b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:29 np0005605476 podman[257466]: 2026-02-02 17:51:29.459155772 +0000 UTC m=+0.021595907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:51:29 np0005605476 podman[257466]: 2026-02-02 17:51:29.561753621 +0000 UTC m=+0.124193766 container init a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.562 239853 DEBUG nova.objects.instance [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'migration_context' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:29 np0005605476 podman[257466]: 2026-02-02 17:51:29.567993986 +0000 UTC m=+0.130434101 container start a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:51:29 np0005605476 podman[257466]: 2026-02-02 17:51:29.571405762 +0000 UTC m=+0.133845907 container attach a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.577 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.577 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Ensure instance console log exists: /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.577 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.578 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.578 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.879 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:29 np0005605476 nova_compute[239846]: 2026-02-02 17:51:29.930 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Successfully created port: 7f07651d-c620-4f85-b534-2f5cc3d866d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb  2 12:51:30 np0005605476 lvm[257615]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:51:30 np0005605476 lvm[257615]: VG ceph_vg1 finished
Feb  2 12:51:30 np0005605476 lvm[257614]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:51:30 np0005605476 lvm[257614]: VG ceph_vg0 finished
Feb  2 12:51:30 np0005605476 lvm[257617]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:51:30 np0005605476 lvm[257617]: VG ceph_vg2 finished
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:30 np0005605476 recursing_ishizaka[257518]: {}
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.334 239853 DEBUG nova.network.neutron [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updated VIF entry in instance network info cache for port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.335 239853 DEBUG nova.network.neutron [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updating instance_info_cache with network_info: [{"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:30 np0005605476 systemd[1]: libpod-a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a.scope: Deactivated successfully.
Feb  2 12:51:30 np0005605476 systemd[1]: libpod-a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a.scope: Consumed 1.059s CPU time.
Feb  2 12:51:30 np0005605476 podman[257466]: 2026-02-02 17:51:30.342807241 +0000 UTC m=+0.905247376 container died a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.352 239853 DEBUG oslo_concurrency.lockutils [req-0d5b5ee4-4b4c-4753-93b3-3b69d3b49cc3 req-ca59a140-4043-4565-b2c0-08af206546c9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-2d909269-9b7a-4d8c-b385-067b624e50bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:30 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5ed872948e6f6cae235befd85564e187f0fed8c330303d1cd2f6aea437345b09-merged.mount: Deactivated successfully.
Feb  2 12:51:30 np0005605476 podman[257466]: 2026-02-02 17:51:30.37945867 +0000 UTC m=+0.941898785 container remove a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:51:30 np0005605476 systemd[1]: libpod-conmon-a3d55b635e7955dde4d58dffa8a222ce1b37dd519a5fd7420c94d6aaeefa088a.scope: Deactivated successfully.
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:51:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.745 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Successfully updated port: 7f07651d-c620-4f85-b534-2f5cc3d866d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.772 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.772 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquired lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.772 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.834 239853 DEBUG nova.compute.manager [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.834 239853 DEBUG nova.compute.manager [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing instance network info cache due to event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.834 239853 DEBUG oslo_concurrency.lockutils [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:30 np0005605476 nova_compute[239846]: 2026-02-02 17:51:30.902 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:51:31 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:31.046 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:31 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:31 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:51:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.6 MiB/s rd, 7.3 MiB/s wr, 262 op/s
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.005 239853 DEBUG nova.network.neutron [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating instance_info_cache with network_info: [{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.028 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Releasing lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.028 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Instance network_info: |[{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.029 239853 DEBUG oslo_concurrency.lockutils [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.029 239853 DEBUG nova.network.neutron [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.035 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Start _get_guest_xml network_info=[{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T17:51:19Z,direct_url=<?>,disk_format='raw',id=9440fdc0-af14-4205-993a-98d6bf0736d2,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1449776055',owner='e9c44462f87f421099e0b0d1376904c4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T17:51:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '9440fdc0-af14-4205-993a-98d6bf0736d2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.041 239853 WARNING nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.046 239853 DEBUG nova.virt.libvirt.host [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.048 239853 DEBUG nova.virt.libvirt.host [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.067 239853 DEBUG nova.virt.libvirt.host [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.067 239853 DEBUG nova.virt.libvirt.host [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.068 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.068 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T17:51:19Z,direct_url=<?>,disk_format='raw',id=9440fdc0-af14-4205-993a-98d6bf0736d2,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1449776055',owner='e9c44462f87f421099e0b0d1376904c4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T17:51:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.072 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.072 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.072 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.073 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.073 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.074 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.074 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.075 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.075 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.076 239853 DEBUG nova.virt.hardware [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.080 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092031154' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.626 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.645 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:32 np0005605476 nova_compute[239846]: 2026-02-02 17:51:32.648 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648642110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.147 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.149 239853 DEBUG nova.virt.libvirt.vif [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-27035982',display_name='tempest-TestStampPattern-server-27035982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-27035982',id=13,image_ref='9440fdc0-af14-4205-993a-98d6bf0736d2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-s3gt3fa5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c29c7ea2-29c6-40eb-a75b-289e533ecc64',image_min_disk='1',image_min_ram='0',image_owner_id='e9c44462f87f421099e0b0d1376904c4',image_owner_project_name='tempest-TestStampPattern-468537565',image_owner_user_name='tempest-TestStampPattern-468537565-project-member',image_user_id='35a3cbbc2e32427f9356703501969892',network_allocated='True',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:28Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=e834c41a-ab1b-421b-8fbc-afcb2d642a3c,vcpu_model
=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.149 239853 DEBUG nova.network.os_vif_util [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.150 239853 DEBUG nova.network.os_vif_util [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.151 239853 DEBUG nova.objects.instance [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.8 MiB/s wr, 242 op/s
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.478 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <uuid>e834c41a-ab1b-421b-8fbc-afcb2d642a3c</uuid>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <name>instance-0000000d</name>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestStampPattern-server-27035982</nova:name>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:51:32</nova:creationTime>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:user uuid="35a3cbbc2e32427f9356703501969892">tempest-TestStampPattern-468537565-project-member</nova:user>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:project uuid="e9c44462f87f421099e0b0d1376904c4">tempest-TestStampPattern-468537565</nova:project>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="9440fdc0-af14-4205-993a-98d6bf0736d2"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <nova:port uuid="7f07651d-c620-4f85-b534-2f5cc3d866d5">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="serial">e834c41a-ab1b-421b-8fbc-afcb2d642a3c</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="uuid">e834c41a-ab1b-421b-8fbc-afcb2d642a3c</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:97:50:54"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <target dev="tap7f07651d-c6"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/console.log" append="off"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <input type="keyboard" bus="usb"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:51:33 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:51:33 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:51:33 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:51:33 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.478 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Preparing to wait for external event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.478 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.479 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.479 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.479 239853 DEBUG nova.virt.libvirt.vif [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-27035982',display_name='tempest-TestStampPattern-server-27035982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-27035982',id=13,image_ref='9440fdc0-af14-4205-993a-98d6bf0736d2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-s3gt3fa5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c29c7ea2-29c6-40eb-a75b-289e533ecc64',image_min_disk='1',image_min_ram='0',image_owner_id='e9c44462f87f421099e0b0d1376904c4',image_owner_project_name='tempest-TestStampPattern-468537565',image_owner_user_name='tempest-TestStampPattern-468537565-project-member',image_user_id='35a3cbbc2e32427f9356703501969892',network_allocated='True',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:28Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=e834c41a-ab1b-421b-8fbc-afcb2d642a3c,
vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.479 239853 DEBUG nova.network.os_vif_util [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.480 239853 DEBUG nova.network.os_vif_util [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.480 239853 DEBUG os_vif [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.481 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.481 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.481 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.485 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.485 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f07651d-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.486 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f07651d-c6, col_values=(('external_ids', {'iface-id': '7f07651d-c620-4f85-b534-2f5cc3d866d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:50:54', 'vm-uuid': 'e834c41a-ab1b-421b-8fbc-afcb2d642a3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.487 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:33 np0005605476 NetworkManager[49022]: <info>  [1770054693.4889] manager: (tap7f07651d-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.489 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.495 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.496 239853 INFO os_vif [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6')#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.549 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.549 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.549 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No VIF found with MAC fa:16:3e:97:50:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.550 239853 INFO nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Using config drive#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.570 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.857 239853 DEBUG nova.network.neutron [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updated VIF entry in instance network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.857 239853 DEBUG nova.network.neutron [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating instance_info_cache with network_info: [{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.871 239853 DEBUG oslo_concurrency.lockutils [req-4eb8d9d6-51f2-4920-9b42-e1721998f526 req-d7a5490e-5684-4e44-a23b-3311b07cce42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.917 239853 INFO nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Creating config drive at /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config#033[00m
Feb  2 12:51:33 np0005605476 nova_compute[239846]: 2026-02-02 17:51:33.922 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfbkubai_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.051 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfbkubai_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.075 239853 DEBUG nova.storage.rbd_utils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] rbd image e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.079 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.219 239853 DEBUG oslo_concurrency.processutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config e834c41a-ab1b-421b-8fbc-afcb2d642a3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.220 239853 INFO nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Deleting local config drive /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c/disk.config because it was imported into RBD.#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:34 np0005605476 kernel: tap7f07651d-c6: entered promiscuous mode
Feb  2 12:51:34 np0005605476 NetworkManager[49022]: <info>  [1770054694.2764] manager: (tap7f07651d-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Feb  2 12:51:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:34Z|00126|binding|INFO|Claiming lport 7f07651d-c620-4f85-b534-2f5cc3d866d5 for this chassis.
Feb  2 12:51:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:34Z|00127|binding|INFO|7f07651d-c620-4f85-b534-2f5cc3d866d5: Claiming fa:16:3e:97:50:54 10.100.0.6
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.278 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.290 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:50:54 10.100.0.6'], port_security=['fa:16:3e:97:50:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e834c41a-ab1b-421b-8fbc-afcb2d642a3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9c44462f87f421099e0b0d1376904c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c63f7b3b-d1b7-480e-bc0f-69ad7c8d6195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e81d0e0c-73b2-43ee-93af-f299a40e5ded, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=7f07651d-c620-4f85-b534-2f5cc3d866d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:51:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:34Z|00128|binding|INFO|Setting lport 7f07651d-c620-4f85-b534-2f5cc3d866d5 ovn-installed in OVS
Feb  2 12:51:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:34Z|00129|binding|INFO|Setting lport 7f07651d-c620-4f85-b534-2f5cc3d866d5 up in Southbound
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.293 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 7f07651d-c620-4f85-b534-2f5cc3d866d5 in datapath 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 bound to our chassis#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.294 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.296 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.300 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.309 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1344b849-2f2a-44ae-9cbc-f944082ae67f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 systemd-udevd[257794]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:51:34 np0005605476 NetworkManager[49022]: <info>  [1770054694.3295] device (tap7f07651d-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:51:34 np0005605476 NetworkManager[49022]: <info>  [1770054694.3305] device (tap7f07651d-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:51:34 np0005605476 systemd-machined[208080]: New machine qemu-13-instance-0000000d.
Feb  2 12:51:34 np0005605476 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.347 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[53b345be-c0a4-462c-85c9-460ab6be8f22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.351 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[8b41dff2-3f78-4a05-894f-2c08d969d812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.378 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7714d1-47fe-4fea-8b78-d69ccbf6c89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.391 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[43b92420-2a4b-432d-8511-1875562938c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27d3f0a2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:1e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389118, 'reachable_time': 20773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257804, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.408 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[df828fc9-22fc-417f-a9f5-5a8171d91fa5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap27d3f0a2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389126, 'tstamp': 389126}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257808, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap27d3f0a2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389129, 'tstamp': 389129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257808, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.410 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d3f0a2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.411 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.412 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.414 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d3f0a2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.415 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.415 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27d3f0a2-70, col_values=(('external_ids', {'iface-id': 'feaa395a-f5d1-49f8-90b4-f45ef83f72dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:34.415 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.477 239853 DEBUG nova.compute.manager [req-18e57ed5-0161-46c5-96a2-c89e83fbafcb req-4e239d0d-6201-4f07-b0ff-63430244c6b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.478 239853 DEBUG oslo_concurrency.lockutils [req-18e57ed5-0161-46c5-96a2-c89e83fbafcb req-4e239d0d-6201-4f07-b0ff-63430244c6b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.479 239853 DEBUG oslo_concurrency.lockutils [req-18e57ed5-0161-46c5-96a2-c89e83fbafcb req-4e239d0d-6201-4f07-b0ff-63430244c6b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.479 239853 DEBUG oslo_concurrency.lockutils [req-18e57ed5-0161-46c5-96a2-c89e83fbafcb req-4e239d0d-6201-4f07-b0ff-63430244c6b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.480 239853 DEBUG nova.compute.manager [req-18e57ed5-0161-46c5-96a2-c89e83fbafcb req-4e239d0d-6201-4f07-b0ff-63430244c6b8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Processing event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:51:34 np0005605476 nova_compute[239846]: 2026-02-02 17:51:34.881 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.053 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054695.053097, e834c41a-ab1b-421b-8fbc-afcb2d642a3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.054 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] VM Started (Lifecycle Event)#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.056 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.059 239853 DEBUG nova.virt.libvirt.driver [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.062 239853 INFO nova.virt.libvirt.driver [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Instance spawned successfully.#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.062 239853 INFO nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Took 6.15 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.063 239853 DEBUG nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.073 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.078 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.109 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.110 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054695.053989, e834c41a-ab1b-421b-8fbc-afcb2d642a3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.110 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:51:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.122 239853 INFO nova.compute.manager [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Took 7.40 seconds to build instance.#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.127 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.130 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054695.0588355, e834c41a-ab1b-421b-8fbc-afcb2d642a3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.130 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.137 239853 DEBUG oslo_concurrency.lockutils [None req-12a80279-fb46-4a93-b4db-4a44db2762d3 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.145 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.148 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.236 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.257 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.258 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.258 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.430 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.431 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.431 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:51:35 np0005605476 nova_compute[239846]: 2026-02-02 17:51:35.432 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 137 op/s
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.578 239853 DEBUG nova.compute.manager [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.579 239853 DEBUG oslo_concurrency.lockutils [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.580 239853 DEBUG oslo_concurrency.lockutils [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.580 239853 DEBUG oslo_concurrency.lockutils [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.580 239853 DEBUG nova.compute.manager [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] No waiting events found dispatching network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:51:36 np0005605476 nova_compute[239846]: 2026-02-02 17:51:36.581 239853 WARNING nova.compute.manager [req-d29cbe40-4126-44f5-b9bb-417c04a4825c req-c57e1292-dfe4-4909-a7aa-afe459e904a5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received unexpected event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:51:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:51:36
Feb  2 12:51:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:51:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:51:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb  2 12:51:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.010 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.012 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.029 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.070 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.095 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.095 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.096 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.110 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.111 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.118 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.118 239853 INFO nova.compute.claims [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.240 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.267 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.283 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 137 op/s
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:51:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:51:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953760135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.796 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.801 239853 DEBUG nova.compute.provider_tree [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.821 239853 DEBUG nova.scheduler.client.report [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.845 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.846 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.848 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.849 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.849 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.849 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:37 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:37Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:fc:76 10.100.0.13
Feb  2 12:51:37 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:37Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:fc:76 10.100.0.13
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.916 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.916 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.942 239853 INFO nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:51:37 np0005605476 nova_compute[239846]: 2026-02-02 17:51:37.965 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.056 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.058 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.059 239853 INFO nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Creating image(s)#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.087 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.115 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.145 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.151 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.221 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.222 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.222 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.223 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.250 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.253 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 58f005d9-a28a-4d78-894c-45ac84602542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.270 239853 DEBUG nova.policy [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '275a756bbf8748d6adfeb979b49b1846', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '579907b0a88b4f8b8769e75035c71cb0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:51:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438443063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.535 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.549 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.556 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 58f005d9-a28a-4d78-894c-45ac84602542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.625 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] resizing rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.684 239853 DEBUG nova.compute.manager [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.684 239853 DEBUG nova.compute.manager [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing instance network info cache due to event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.684 239853 DEBUG oslo_concurrency.lockutils [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.685 239853 DEBUG oslo_concurrency.lockutils [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.685 239853 DEBUG nova.network.neutron [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.732 239853 DEBUG nova.objects.instance [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'migration_context' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.754 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.754 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Ensure instance console log exists: /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.755 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.755 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.755 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.760 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.760 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.765 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.765 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.770 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.770 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.770 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.951 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.952 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=59.92131426092237GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.952 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:38 np0005605476 nova_compute[239846]: 2026-02-02 17:51:38.953 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.024 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance c29c7ea2-29c6-40eb-a75b-289e533ecc64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.025 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 2d909269-9b7a-4d8c-b385-067b624e50bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.025 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.025 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 58f005d9-a28a-4d78-894c-45ac84602542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.025 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.025 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.110 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.361 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Successfully created port: 885b4958-c65e-403a-a99d-2c07671482a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:51:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.4 MiB/s wr, 259 op/s
Feb  2 12:51:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324849078' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.804 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.809 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.830 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.865 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.866 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:39 np0005605476 nova_compute[239846]: 2026-02-02 17:51:39.883 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.121643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700121742, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1332, "num_deletes": 264, "total_data_size": 1770689, "memory_usage": 1802552, "flush_reason": "Manual Compaction"}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700139300, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1727091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23966, "largest_seqno": 25297, "table_properties": {"data_size": 1720669, "index_size": 3561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14074, "raw_average_key_size": 20, "raw_value_size": 1707390, "raw_average_value_size": 2435, "num_data_blocks": 158, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054618, "oldest_key_time": 1770054618, "file_creation_time": 1770054700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17705 microseconds, and 5116 cpu microseconds.
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.139362) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1727091 bytes OK
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.139409) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.141661) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.141686) EVENT_LOG_v1 {"time_micros": 1770054700141678, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.141713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1764473, prev total WAL file size 1764473, number of live WAL files 2.
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.142657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1686KB)], [53(8930KB)]
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700142704, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10871963, "oldest_snapshot_seqno": -1}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5410 keys, 10770291 bytes, temperature: kUnknown
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700188103, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10770291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10727368, "index_size": 28285, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 134349, "raw_average_key_size": 24, "raw_value_size": 10623273, "raw_average_value_size": 1963, "num_data_blocks": 1167, "num_entries": 5410, "num_filter_entries": 5410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.188360) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10770291 bytes
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.193215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.0 rd, 236.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.5) write-amplify(6.2) OK, records in: 5952, records dropped: 542 output_compression: NoCompression
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.193233) EVENT_LOG_v1 {"time_micros": 1770054700193223, "job": 28, "event": "compaction_finished", "compaction_time_micros": 45485, "compaction_time_cpu_micros": 16652, "output_level": 6, "num_output_files": 1, "total_output_size": 10770291, "num_input_records": 5952, "num_output_records": 5410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700193448, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054700194044, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.142597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.194101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.194105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.194107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.194109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:51:40.194110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.234 239853 DEBUG nova.network.neutron [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updated VIF entry in instance network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.235 239853 DEBUG nova.network.neutron [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating instance_info_cache with network_info: [{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.257 239853 DEBUG oslo_concurrency.lockutils [req-bfe45655-ecf5-482a-8965-6c24d6710723 req-2227324b-2a24-4b30-a853-1c96c1194730 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.487 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Successfully updated port: 885b4958-c65e-403a-a99d-2c07671482a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.507 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.507 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquired lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.508 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.558 239853 DEBUG nova.compute.manager [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-changed-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.558 239853 DEBUG nova.compute.manager [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Refreshing instance network info cache due to event network-changed-885b4958-c65e-403a-a99d-2c07671482a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.559 239853 DEBUG oslo_concurrency.lockutils [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:40 np0005605476 nova_compute[239846]: 2026-02-02 17:51:40.634 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.472 239853 DEBUG nova.network.neutron [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updating instance_info_cache with network_info: [{"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.2 MiB/s wr, 271 op/s
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.490 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Releasing lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.491 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Instance network_info: |[{"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.491 239853 DEBUG oslo_concurrency.lockutils [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.491 239853 DEBUG nova.network.neutron [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Refreshing network info cache for port 885b4958-c65e-403a-a99d-2c07671482a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.495 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Start _get_guest_xml network_info=[{"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.499 239853 WARNING nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.503 239853 DEBUG nova.virt.libvirt.host [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.504 239853 DEBUG nova.virt.libvirt.host [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.510 239853 DEBUG nova.virt.libvirt.host [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.510 239853 DEBUG nova.virt.libvirt.host [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.511 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.511 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.511 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.511 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.512 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.512 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.512 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.512 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.513 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.513 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.513 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.513 239853 DEBUG nova.virt.hardware [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.517 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.866 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.867 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.867 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:51:41 np0005605476 nova_compute[239846]: 2026-02-02 17:51:41.868 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:51:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2686433790' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.077 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.098 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.101 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.513 239853 DEBUG nova.network.neutron [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updated VIF entry in instance network info cache for port 885b4958-c65e-403a-a99d-2c07671482a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.514 239853 DEBUG nova.network.neutron [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updating instance_info_cache with network_info: [{"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.531 239853 DEBUG oslo_concurrency.lockutils [req-325cbd1c-c94f-4f9c-89b9-f33a8b0740d5 req-4c80b6a8-7fb4-42be-873c-24304fdf45a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678476011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.674 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.675 239853 DEBUG nova.virt.libvirt.vif [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1441261',display_name='tempest-TestEncryptedCinderVolumes-server-1441261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1441261',id=14,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCss6AgPa7bURFSmlFx5e9TW4W97x0hTlX+lOzoWItgtIAQg82IfDa9TH2uKSgbrmUCoaez3KNng6Sw+wl5jyO6WgfkSYYkHE+yy0WX7Vs/I+XwoKTiVHjgf2XDq1ZSx4g==',key_name='tempest-keypair-1629260160',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='579907b0a88b4f8b8769e75035c71cb0',ramdisk_id='',reservation_id='r-8ud6l41s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1546153962',owner_user_name='tempest-TestEncryptedCinderVolumes-1546153962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='275a756bbf8748d6adfeb979b49b1846',uuid=58f005d9-a28a-4d78-894c-45ac84602542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.675 239853 DEBUG nova.network.os_vif_util [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converting VIF {"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.676 239853 DEBUG nova.network.os_vif_util [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.677 239853 DEBUG nova.objects.instance [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.699 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <uuid>58f005d9-a28a-4d78-894c-45ac84602542</uuid>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <name>instance-0000000e</name>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1441261</nova:name>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:51:41</nova:creationTime>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:user uuid="275a756bbf8748d6adfeb979b49b1846">tempest-TestEncryptedCinderVolumes-1546153962-project-member</nova:user>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:project uuid="579907b0a88b4f8b8769e75035c71cb0">tempest-TestEncryptedCinderVolumes-1546153962</nova:project>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <nova:port uuid="885b4958-c65e-403a-a99d-2c07671482a7">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="serial">58f005d9-a28a-4d78-894c-45ac84602542</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="uuid">58f005d9-a28a-4d78-894c-45ac84602542</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/58f005d9-a28a-4d78-894c-45ac84602542_disk">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/58f005d9-a28a-4d78-894c-45ac84602542_disk.config">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:be:ad:69"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <target dev="tap885b4958-c6"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/console.log" append="off"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:51:42 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:51:42 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:51:42 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:51:42 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.700 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Preparing to wait for external event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.700 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.700 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.701 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.701 239853 DEBUG nova.virt.libvirt.vif [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:51:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1441261',display_name='tempest-TestEncryptedCinderVolumes-server-1441261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1441261',id=14,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCss6AgPa7bURFSmlFx5e9TW4W97x0hTlX+lOzoWItgtIAQg82IfDa9TH2uKSgbrmUCoaez3KNng6Sw+wl5jyO6WgfkSYYkHE+yy0WX7Vs/I+XwoKTiVHjgf2XDq1ZSx4g==',key_name='tempest-keypair-1629260160',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='579907b0a88b4f8b8769e75035c71cb0',ramdisk_id='',reservation_id='r-8ud6l41s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1546153962',owner_user_name='tempest-TestEncryptedCinderVolumes-1546153962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:51:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='275a756bbf8748d6adfeb979b49b1846',uuid=58f005d9-a28a-4d78-894c-45ac84602542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.702 239853 DEBUG nova.network.os_vif_util [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converting VIF {"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.702 239853 DEBUG nova.network.os_vif_util [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.703 239853 DEBUG os_vif [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.703 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.704 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.704 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.706 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.706 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885b4958-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.707 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap885b4958-c6, col_values=(('external_ids', {'iface-id': '885b4958-c65e-403a-a99d-2c07671482a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:ad:69', 'vm-uuid': '58f005d9-a28a-4d78-894c-45ac84602542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:42 np0005605476 NetworkManager[49022]: <info>  [1770054702.7095] manager: (tap885b4958-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.708 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.710 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.714 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.715 239853 INFO os_vif [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6')#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.776 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.777 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.777 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No VIF found with MAC fa:16:3e:be:ad:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.777 239853 INFO nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Using config drive#033[00m
Feb  2 12:51:42 np0005605476 nova_compute[239846]: 2026-02-02 17:51:42.799 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.214 239853 INFO nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Creating config drive at /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.219 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwy2zxat execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.351 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwy2zxat" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.372 239853 DEBUG nova.storage.rbd_utils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] rbd image 58f005d9-a28a-4d78-894c-45ac84602542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.375 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config 58f005d9-a28a-4d78-894c-45ac84602542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.471 239853 DEBUG oslo_concurrency.processutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config 58f005d9-a28a-4d78-894c-45ac84602542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.472 239853 INFO nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Deleting local config drive /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542/disk.config because it was imported into RBD.#033[00m
Feb  2 12:51:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 223 op/s
Feb  2 12:51:43 np0005605476 kernel: tap885b4958-c6: entered promiscuous mode
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.4957] manager: (tap885b4958-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.497 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:43 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:43Z|00130|binding|INFO|Claiming lport 885b4958-c65e-403a-a99d-2c07671482a7 for this chassis.
Feb  2 12:51:43 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:43Z|00131|binding|INFO|885b4958-c65e-403a-a99d-2c07671482a7: Claiming fa:16:3e:be:ad:69 10.100.0.10
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.505 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:ad:69 10.100.0.10'], port_security=['fa:16:3e:be:ad:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '58f005d9-a28a-4d78-894c-45ac84602542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '579907b0a88b4f8b8769e75035c71cb0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a63db91b-feb8-4308-baa2-1840080b75f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5eea345e-6e1c-46cd-8a99-ce93e73e6294, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=885b4958-c65e-403a-a99d-2c07671482a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.506 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:43 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:43Z|00132|binding|INFO|Setting lport 885b4958-c65e-403a-a99d-2c07671482a7 ovn-installed in OVS
Feb  2 12:51:43 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:43Z|00133|binding|INFO|Setting lport 885b4958-c65e-403a-a99d-2c07671482a7 up in Southbound
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.508 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.515 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 885b4958-c65e-403a-a99d-2c07671482a7 in datapath 4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 bound to our chassis#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.518 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d65fc0d-a384-439a-aa0f-b1fdb2ce9802#033[00m
Feb  2 12:51:43 np0005605476 systemd-udevd[258219]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.527 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[60d81bba-ece3-4a5c-8789-5b2c084a0817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.528 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d65fc0d-a1 in ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:51:43 np0005605476 systemd-machined[208080]: New machine qemu-14-instance-0000000e.
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.533 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d65fc0d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.533 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[781e4f5a-2460-4538-8bb5-c8b4ab25fc16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.5387] device (tap885b4958-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.5394] device (tap885b4958-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.539 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d61dcc-d518-46e1-a175-6fa05d4d4d50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.550 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[ff166394-6f21-4dd5-b766-f5b42bd05c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.562 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[98f5d28d-bd1b-44b8-83d1-5eadf8aa4665]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.590 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b802f842-d5a7-4329-841b-7b679d299274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.595 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c441da94-dc0f-4f1a-ad71-d7e21c4859a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.5966] manager: (tap4d65fc0d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.619 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f4b582-8942-4fa2-b2f1-0e5602f9592f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.622 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0c095e-1ef5-4c25-8d1e-31bbb3603ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.6377] device (tap4d65fc0d-a0): carrier: link connected
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.641 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[867e6b9e-cf2b-4636-8610-573fb0d924a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.654 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[02fb470d-702e-4960-82c5-579e13539be1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d65fc0d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:26:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394496, 'reachable_time': 18918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258252, 'error': None, 'target': 'ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.666 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0c518ca6-2492-4e77-a896-5332798dd2cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:26da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394496, 'tstamp': 394496}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258253, 'error': None, 'target': 'ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.675 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[db7139b8-96c5-4161-a179-acf98398e58c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d65fc0d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:26:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394496, 'reachable_time': 18918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258254, 'error': None, 'target': 'ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.697 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc3020-518d-4370-a38f-4409343f918b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.705 239853 DEBUG nova.compute.manager [req-a8998caa-eb58-4289-afec-6a879fcd7f7d req-edce77ef-3a00-4a57-b8e1-4215280104e8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.705 239853 DEBUG oslo_concurrency.lockutils [req-a8998caa-eb58-4289-afec-6a879fcd7f7d req-edce77ef-3a00-4a57-b8e1-4215280104e8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.705 239853 DEBUG oslo_concurrency.lockutils [req-a8998caa-eb58-4289-afec-6a879fcd7f7d req-edce77ef-3a00-4a57-b8e1-4215280104e8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.705 239853 DEBUG oslo_concurrency.lockutils [req-a8998caa-eb58-4289-afec-6a879fcd7f7d req-edce77ef-3a00-4a57-b8e1-4215280104e8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.706 239853 DEBUG nova.compute.manager [req-a8998caa-eb58-4289-afec-6a879fcd7f7d req-edce77ef-3a00-4a57-b8e1-4215280104e8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Processing event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.732 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1aca35eb-8e5b-48e4-9ffc-3852ac8cfef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.733 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d65fc0d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.733 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.734 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d65fc0d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:43 np0005605476 kernel: tap4d65fc0d-a0: entered promiscuous mode
Feb  2 12:51:43 np0005605476 NetworkManager[49022]: <info>  [1770054703.7363] manager: (tap4d65fc0d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.740 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d65fc0d-a0, col_values=(('external_ids', {'iface-id': '7f175d8b-91cd-487a-92aa-87f7179c4aec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:43 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:43Z|00134|binding|INFO|Releasing lport 7f175d8b-91cd-487a-92aa-87f7179c4aec from this chassis (sb_readonly=0)
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.744 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d65fc0d-a384-439a-aa0f-b1fdb2ce9802.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d65fc0d-a384-439a-aa0f-b1fdb2ce9802.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.748 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:43 np0005605476 nova_compute[239846]: 2026-02-02 17:51:43.752 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.752 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e9427dce-2cb2-4c29-b2cd-21e95808421f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.753 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/4d65fc0d-a384-439a-aa0f-b1fdb2ce9802.pid.haproxy
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 4d65fc0d-a384-439a-aa0f-b1fdb2ce9802
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:51:43 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:43.754 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'env', 'PROCESS_TAG=haproxy-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d65fc0d-a384-439a-aa0f-b1fdb2ce9802.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:51:44 np0005605476 podman[258294]: 2026-02-02 17:51:44.091024363 +0000 UTC m=+0.059151681 container create 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 12:51:44 np0005605476 systemd[1]: Started libpod-conmon-4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47.scope.
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.138 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054704.1384254, 58f005d9-a28a-4d78-894c-45ac84602542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.139 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] VM Started (Lifecycle Event)#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.141 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:51:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.145 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:51:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cc05fcd9d47053287ba574162040e716aae4bb738fc9b35becf65279d3ad9b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.148 239853 INFO nova.virt.libvirt.driver [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Instance spawned successfully.#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.148 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:51:44 np0005605476 podman[258294]: 2026-02-02 17:51:44.054868358 +0000 UTC m=+0.022995686 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:51:44 np0005605476 podman[258294]: 2026-02-02 17:51:44.157009695 +0000 UTC m=+0.125137003 container init 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb  2 12:51:44 np0005605476 podman[258294]: 2026-02-02 17:51:44.161704907 +0000 UTC m=+0.129832235 container start 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.164 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.170 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.174 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.175 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.175 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.176 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.176 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.176 239853 DEBUG nova.virt.libvirt.driver [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:51:44 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [NOTICE]   (258347) : New worker (258349) forked
Feb  2 12:51:44 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [NOTICE]   (258347) : Loading success.
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.189 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.189 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054704.1384928, 58f005d9-a28a-4d78-894c-45ac84602542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.189 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.207 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.209 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054704.145814, 58f005d9-a28a-4d78-894c-45ac84602542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.210 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.230 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.233 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.248 239853 INFO nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Took 6.19 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.249 239853 DEBUG nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.258 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.324 239853 INFO nova.compute.manager [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Took 7.23 seconds to build instance.#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.338 239853 DEBUG oslo_concurrency.lockutils [None req-b5ba826f-85f2-43b8-a729-cfeb8d9a54c7 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.847 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.847 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.847 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.848 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.848 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.849 239853 INFO nova.compute.manager [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Terminating instance#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.850 239853 DEBUG nova.compute.manager [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:51:44 np0005605476 nova_compute[239846]: 2026-02-02 17:51:44.886 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:45 np0005605476 kernel: tap4aaf8ce7-0b (unregistering): left promiscuous mode
Feb  2 12:51:45 np0005605476 NetworkManager[49022]: <info>  [1770054705.1563] device (tap4aaf8ce7-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:51:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:45Z|00135|binding|INFO|Releasing lport 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 from this chassis (sb_readonly=0)
Feb  2 12:51:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:45Z|00136|binding|INFO|Setting lport 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 down in Southbound
Feb  2 12:51:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:45Z|00137|binding|INFO|Removing iface tap4aaf8ce7-0b ovn-installed in OVS
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.169 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.171 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.172 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:fc:76 10.100.0.13'], port_security=['fa:16:3e:11:fc:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2d909269-9b7a-4d8c-b385-067b624e50bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8267f865-a42d-418a-8f76-cf395fe72304', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07fcb0b617c84dccb0074a9f1c41229e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47bc4c06-2c5a-4139-a520-1f888fa04212', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29d8f34c-033a-4910-b36d-40dce5cc751d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=4aaf8ce7-0bce-41b5-bc64-ea40a533f786) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.173 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 4aaf8ce7-0bce-41b5-bc64-ea40a533f786 in datapath 8267f865-a42d-418a-8f76-cf395fe72304 unbound from our chassis#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.175 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8267f865-a42d-418a-8f76-cf395fe72304, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.175 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2194cca2-df60-4b14-9639-dda2ed7e9534]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.177 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304 namespace which is not needed anymore#033[00m
Feb  2 12:51:45 np0005605476 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb  2 12:51:45 np0005605476 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 12.334s CPU time.
Feb  2 12:51:45 np0005605476 systemd-machined[208080]: Machine qemu-12-instance-0000000c terminated.
Feb  2 12:51:45 np0005605476 podman[258360]: 2026-02-02 17:51:45.230755379 +0000 UTC m=+0.049448059 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.280 239853 INFO nova.virt.libvirt.driver [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Instance destroyed successfully.#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.280 239853 DEBUG nova.objects.instance [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lazy-loading 'resources' on Instance uuid 2d909269-9b7a-4d8c-b385-067b624e50bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.295 239853 DEBUG nova.virt.libvirt.vif [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:51:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-391420875',display_name='tempest-instance-391420875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-391420875',id=12,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHTQbKHoOKXgtPCda2P+xfduDnfHo0kDKiWIKzuAWBql1fwUTGWxPvrKc6SHeOWoa2o4Vo/30fD792pb1rUBQr6ZcrY2rdJ0d62PxAhx3ZuIvZX6lb9S0CpqVEoa7Ce+A==',key_name='tempest-keypair-1368486445',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07fcb0b617c84dccb0074a9f1c41229e',ramdisk_id='',reservation_id='r-sme9lh0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-248885617',owner_user_name='tempest-VolumesBackupsTest-248885617-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91a3ca2bdb8d4c1fbfab4f38d262f4e0',uuid=2d909269-9b7a-4d8c-b385-067b624e50bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.295 239853 DEBUG nova.network.os_vif_util [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converting VIF {"id": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "address": "fa:16:3e:11:fc:76", "network": {"id": "8267f865-a42d-418a-8f76-cf395fe72304", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-659863619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07fcb0b617c84dccb0074a9f1c41229e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aaf8ce7-0b", "ovs_interfaceid": "4aaf8ce7-0bce-41b5-bc64-ea40a533f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.296 239853 DEBUG nova.network.os_vif_util [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.296 239853 DEBUG os_vif [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.298 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.298 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aaf8ce7-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [NOTICE]   (256769) : haproxy version is 2.8.14-c23fe91
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [NOTICE]   (256769) : path to executable is /usr/sbin/haproxy
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [WARNING]  (256769) : Exiting Master process...
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [WARNING]  (256769) : Exiting Master process...
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [ALERT]    (256769) : Current worker (256771) exited with code 143 (Terminated)
Feb  2 12:51:45 np0005605476 neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304[256765]: [WARNING]  (256769) : All workers exited. Exiting... (0)
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.330 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:51:45 np0005605476 systemd[1]: libpod-655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9.scope: Deactivated successfully.
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.335 239853 INFO os_vif [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:fc:76,bridge_name='br-int',has_traffic_filtering=True,id=4aaf8ce7-0bce-41b5-bc64-ea40a533f786,network=Network(8267f865-a42d-418a-8f76-cf395fe72304),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4aaf8ce7-0b')#033[00m
Feb  2 12:51:45 np0005605476 podman[258397]: 2026-02-02 17:51:45.342420643 +0000 UTC m=+0.087843357 container died 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:51:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9-userdata-shm.mount: Deactivated successfully.
Feb  2 12:51:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5afb5dab47848c532eb605f7d2910eb386f1ebf020874af4f94aabe195d474bf-merged.mount: Deactivated successfully.
Feb  2 12:51:45 np0005605476 podman[258397]: 2026-02-02 17:51:45.385466391 +0000 UTC m=+0.130889105 container cleanup 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 12:51:45 np0005605476 systemd[1]: libpod-conmon-655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9.scope: Deactivated successfully.
Feb  2 12:51:45 np0005605476 podman[258452]: 2026-02-02 17:51:45.448649664 +0000 UTC m=+0.043928264 container remove 655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.452 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[53da2da6-d677-4e86-b134-6772fb370151]: (4, ('Mon Feb  2 05:51:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304 (655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9)\n655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9\nMon Feb  2 05:51:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304 (655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9)\n655e884815c2549248a695afc6e224cfcce9c2676a157ef581fe7ae033de35c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.454 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc8a1bc-ce2f-42ed-a968-cd95d213758e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.456 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8267f865-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.457 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 kernel: tap8267f865-a0: left promiscuous mode
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.468 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.469 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f05bc5a-52f8-48cb-bb14-2d557023983b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 240 op/s
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.487 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[091923c1-426a-4529-be0f-9ae8b8b3edce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.488 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[873bbd15-e280-437f-9aac-624c3680cef6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.503 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[da078b63-0afb-4f59-9983-d55a70ec62dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392571, 'reachable_time': 35763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258466, 'error': None, 'target': 'ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 systemd[1]: run-netns-ovnmeta\x2d8267f865\x2da42d\x2d418a\x2d8f76\x2dcf395fe72304.mount: Deactivated successfully.
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.508 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8267f865-a42d-418a-8f76-cf395fe72304 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:51:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:45.508 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[54652d86-2278-47b6-a93f-757c149dd65f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.576 239853 DEBUG nova.compute.manager [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-unplugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.577 239853 DEBUG oslo_concurrency.lockutils [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.577 239853 DEBUG oslo_concurrency.lockutils [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.577 239853 DEBUG oslo_concurrency.lockutils [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.577 239853 DEBUG nova.compute.manager [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] No waiting events found dispatching network-vif-unplugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.578 239853 DEBUG nova.compute.manager [req-84f36dd9-940d-4b30-a1a4-3a672727c8e1 req-0db7aa14-969d-4222-908e-cd680a9c29ee e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-unplugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.662 239853 INFO nova.virt.libvirt.driver [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Deleting instance files /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc_del#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.662 239853 INFO nova.virt.libvirt.driver [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Deletion of /var/lib/nova/instances/2d909269-9b7a-4d8c-b385-067b624e50bc_del complete#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.712 239853 INFO nova.compute.manager [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.712 239853 DEBUG oslo.service.loopingcall [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.713 239853 DEBUG nova.compute.manager [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.713 239853 DEBUG nova.network.neutron [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.778 239853 DEBUG nova.compute.manager [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.779 239853 DEBUG oslo_concurrency.lockutils [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.779 239853 DEBUG oslo_concurrency.lockutils [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.779 239853 DEBUG oslo_concurrency.lockutils [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.779 239853 DEBUG nova.compute.manager [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] No waiting events found dispatching network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:51:45 np0005605476 nova_compute[239846]: 2026-02-02 17:51:45.779 239853 WARNING nova.compute.manager [req-cd784891-7c81-4140-9b8c-7693076d56d5 req-91879456-bd6e-4b44-ae4a-10612afaec62 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received unexpected event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.380 239853 DEBUG nova.network.neutron [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.397 239853 INFO nova.compute.manager [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Took 0.68 seconds to deallocate network for instance.#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.555 239853 INFO nova.compute.manager [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.640 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.640 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:46.641 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:46.643 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:51:46.645 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:46 np0005605476 nova_compute[239846]: 2026-02-02 17:51:46.739 239853 DEBUG oslo_concurrency.processutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:51:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:51:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041976912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.275 239853 DEBUG oslo_concurrency.processutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.281 239853 DEBUG nova.compute.provider_tree [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.303 239853 DEBUG nova.scheduler.client.report [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.323 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.344 239853 INFO nova.scheduler.client.report [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Deleted allocations for instance 2d909269-9b7a-4d8c-b385-067b624e50bc#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.412 239853 DEBUG oslo_concurrency.lockutils [None req-fc87d415-0f24-4e85-a15e-aea436a46c6c 91a3ca2bdb8d4c1fbfab4f38d262f4e0 07fcb0b617c84dccb0074a9f1c41229e - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 207 op/s
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001493926857756227 of space, bias 1.0, pg target 0.4481780573268681 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03449408169827316 of space, bias 1.0, pg target 10.348224509481946 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003475245689207776 of space, bias 1.0, pg target 0.1007821249870255 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241446568148206 of space, bias 1.0, pg target 0.413001950476298 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.588907416571536e-07 of space, bias 4.0, pg target 0.0011123132603222982 quantized to 16 (current 16)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:51:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Feb  2 12:51:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:47Z|00020|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Feb  2 12:51:47 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:47Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:97:50:54 10.100.0.6
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.661 239853 DEBUG nova.compute.manager [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.662 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.662 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.663 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "2d909269-9b7a-4d8c-b385-067b624e50bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.663 239853 DEBUG nova.compute.manager [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] No waiting events found dispatching network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.663 239853 WARNING nova.compute.manager [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received unexpected event network-vif-plugged-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.664 239853 DEBUG nova.compute.manager [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-changed-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.664 239853 DEBUG nova.compute.manager [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Refreshing instance network info cache due to event network-changed-885b4958-c65e-403a-a99d-2c07671482a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.664 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.665 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.665 239853 DEBUG nova.network.neutron [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Refreshing network info cache for port 885b4958-c65e-403a-a99d-2c07671482a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:51:47 np0005605476 nova_compute[239846]: 2026-02-02 17:51:47.847 239853 DEBUG nova.compute.manager [req-6171c1a0-6fc9-4fc5-9692-4bcc6eda6393 req-08e23326-63b2-4f87-997c-eeab45692dfd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Received event network-vif-deleted-4aaf8ce7-0bce-41b5-bc64-ea40a533f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:51:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:51:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874324114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:51:48 np0005605476 nova_compute[239846]: 2026-02-02 17:51:48.523 239853 DEBUG nova.network.neutron [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updated VIF entry in instance network info cache for port 885b4958-c65e-403a-a99d-2c07671482a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:51:48 np0005605476 nova_compute[239846]: 2026-02-02 17:51:48.524 239853 DEBUG nova.network.neutron [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updating instance_info_cache with network_info: [{"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:51:48 np0005605476 nova_compute[239846]: 2026-02-02 17:51:48.541 239853 DEBUG oslo_concurrency.lockutils [req-d09a3b21-e6fb-470f-88e6-d12a7a44a87f req-89cce8c3-d478-4aca-b464-3b37a5292527 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-58f005d9-a28a-4d78-894c-45ac84602542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:51:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb  2 12:51:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb  2 12:51:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb  2 12:51:48 np0005605476 podman[258490]: 2026-02-02 17:51:48.670215806 +0000 UTC m=+0.118049354 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 12:51:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 6.7 MiB/s rd, 2.6 MiB/s wr, 244 op/s
Feb  2 12:51:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Feb  2 12:51:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Feb  2 12:51:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Feb  2 12:51:49 np0005605476 nova_compute[239846]: 2026-02-02 17:51:49.890 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:50 np0005605476 nova_compute[239846]: 2026-02-02 17:51:50.328 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:51Z|00022|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Feb  2 12:51:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:51Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:97:50:54 10.100.0.6
Feb  2 12:51:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.9 MiB/s wr, 312 op/s
Feb  2 12:51:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Feb  2 12:51:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Feb  2 12:51:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Feb  2 12:51:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:52Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:50:54 10.100.0.6
Feb  2 12:51:52 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:52Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:50:54 10.100.0.6
Feb  2 12:51:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 9.5 MiB/s rd, 4.9 MiB/s wr, 382 op/s
Feb  2 12:51:54 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 12:51:54 np0005605476 nova_compute[239846]: 2026-02-02 17:51:54.890 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:51:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:55Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:ad:69 10.100.0.10
Feb  2 12:51:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:51:55Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:ad:69 10.100.0.10
Feb  2 12:51:55 np0005605476 nova_compute[239846]: 2026-02-02 17:51:55.330 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:51:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.6 MiB/s wr, 368 op/s
Feb  2 12:51:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Feb  2 12:51:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Feb  2 12:51:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Feb  2 12:51:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 753 KiB/s rd, 1.4 MiB/s wr, 101 op/s
Feb  2 12:51:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 552 KiB/s rd, 3.3 MiB/s wr, 143 op/s
Feb  2 12:51:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Feb  2 12:51:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Feb  2 12:51:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Feb  2 12:51:59 np0005605476 nova_compute[239846]: 2026-02-02 17:51:59.933 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Feb  2 12:52:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Feb  2 12:52:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Feb  2 12:52:00 np0005605476 nova_compute[239846]: 2026-02-02 17:52:00.278 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054705.276956, 2d909269-9b7a-4d8c-b385-067b624e50bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:52:00 np0005605476 nova_compute[239846]: 2026-02-02 17:52:00.278 239853 INFO nova.compute.manager [-] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:52:00 np0005605476 nova_compute[239846]: 2026-02-02 17:52:00.332 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:00 np0005605476 nova_compute[239846]: 2026-02-02 17:52:00.434 239853 DEBUG nova.compute.manager [None req-75ccc1cd-a4c7-42ed-82ed-9f6ec1cc37d7 - - - - - -] [instance: 2d909269-9b7a-4d8c-b385-067b624e50bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:52:01 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 12:52:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:52:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3226493659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:52:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 593 KiB/s rd, 4.0 MiB/s wr, 176 op/s
Feb  2 12:52:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 527 KiB/s rd, 3.5 MiB/s wr, 157 op/s
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.074 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.074 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.088 239853 DEBUG nova.objects.instance [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'flavor' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.133 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.361 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.361 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.362 239853 INFO nova.compute.manager [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Attaching volume 255429c2-5a82-4a67-9bda-beb812b364b7 to /dev/vdb#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.488 239853 DEBUG os_brick.utils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.491 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.501 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.501 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc1b977-73ad-4afb-a508-cb60a7a9a242]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.502 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.528 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.529 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8b0951-d2f1-404f-8b11-51a9505f5524]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.530 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.536 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.537 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[fca8bb94-728d-44b0-996c-4290caf49049]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.538 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fc909d-8101-4004-a0b7-6dc47e1f2106]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.538 239853 DEBUG oslo_concurrency.processutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.557 239853 DEBUG oslo_concurrency.processutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.560 239853 DEBUG os_brick.initiator.connectors.lightos [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.560 239853 DEBUG os_brick.initiator.connectors.lightos [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.561 239853 DEBUG os_brick.initiator.connectors.lightos [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.561 239853 DEBUG os_brick.utils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.562 239853 DEBUG nova.virt.block_device [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updating existing volume attachment record: 8d501798-58b8-4bee-9ee5-14245fc8b757 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:52:04 np0005605476 nova_compute[239846]: 2026-02-02 17:52:04.935 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334268200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334268200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:52:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935075228' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.334 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.437 239853 DEBUG os_brick.encryptors [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a33ca021-5324-4878-b6db-8d641bb03dba', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-255429c2-5a82-4a67-9bda-beb812b364b7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '255429c2-5a82-4a67-9bda-beb812b364b7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '58f005d9-a28a-4d78-894c-45ac84602542', 'attached_at': '', 'detached_at': '', 'volume_id': '255429c2-5a82-4a67-9bda-beb812b364b7', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.443 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.484 239853 DEBUG barbicanclient.v1.secrets [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/a33ca021-5324-4878-b6db-8d641bb03dba get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.484 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 3.0 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 177 KiB/s rd, 93 MiB/s wr, 260 op/s
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.505 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.506 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.529 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.529 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.553 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.554 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.576 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.577 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.597 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.598 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.615 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.616 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.634 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.634 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.655 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.656 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.684 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.685 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.712 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.713 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.729 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.730 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.751 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.751 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.774 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.775 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.799 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.800 239853 INFO barbicanclient.base [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Calculated Secrets uuid ref: secrets/a33ca021-5324-4878-b6db-8d641bb03dba
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.834 239853 DEBUG barbicanclient.client [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.835 239853 DEBUG nova.virt.libvirt.host [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:    <volume>255429c2-5a82-4a67-9bda-beb812b364b7</volume>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:52:05 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:52:05 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.861 239853 DEBUG nova.objects.instance [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'flavor' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.889 239853 DEBUG nova.virt.libvirt.driver [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Attempting to attach volume 255429c2-5a82-4a67-9bda-beb812b364b7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb  2 12:52:05 np0005605476 nova_compute[239846]: 2026-02-02 17:52:05.893 239853 DEBUG nova.virt.libvirt.guest [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-255429c2-5a82-4a67-9bda-beb812b364b7">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <serial>255429c2-5a82-4a67-9bda-beb812b364b7</serial>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:52:05 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="4bade6e8-fdf9-43e9-9d6d-d9ef27dd7861"/>
Feb  2 12:52:05 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:52:05 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:05 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 3.0 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 139 KiB/s rd, 73 MiB/s wr, 203 op/s
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:08 np0005605476 nova_compute[239846]: 2026-02-02 17:52:08.269 239853 DEBUG nova.virt.libvirt.driver [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:52:08 np0005605476 nova_compute[239846]: 2026-02-02 17:52:08.269 239853 DEBUG nova.virt.libvirt.driver [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:52:08 np0005605476 nova_compute[239846]: 2026-02-02 17:52:08.269 239853 DEBUG nova.virt.libvirt.driver [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:52:08 np0005605476 nova_compute[239846]: 2026-02-02 17:52:08.270 239853 DEBUG nova.virt.libvirt.driver [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] No VIF found with MAC fa:16:3e:be:ad:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb  2 12:52:08 np0005605476 nova_compute[239846]: 2026-02-02 17:52:08.421 239853 DEBUG oslo_concurrency.lockutils [None req-305f2f8f-4998-4baf-bbde-c2c91324ea65 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.160 239853 DEBUG oslo_concurrency.lockutils [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.161 239853 DEBUG oslo_concurrency.lockutils [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.175 239853 INFO nova.compute.manager [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Detaching volume 255429c2-5a82-4a67-9bda-beb812b364b7
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.356 239853 INFO nova.virt.block_device [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Attempting to driver detach volume 255429c2-5a82-4a67-9bda-beb812b364b7 from mountpoint /dev/vdb
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.461 239853 DEBUG os_brick.encryptors [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a33ca021-5324-4878-b6db-8d641bb03dba', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-255429c2-5a82-4a67-9bda-beb812b364b7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '255429c2-5a82-4a67-9bda-beb812b364b7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '58f005d9-a28a-4d78-894c-45ac84602542', 'attached_at': '', 'detached_at': '', 'volume_id': '255429c2-5a82-4a67-9bda-beb812b364b7', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.468 239853 DEBUG nova.virt.libvirt.driver [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Attempting to detach device vdb from instance 58f005d9-a28a-4d78-894c-45ac84602542 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.469 239853 DEBUG nova.virt.libvirt.guest [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-255429c2-5a82-4a67-9bda-beb812b364b7">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <serial>255429c2-5a82-4a67-9bda-beb812b364b7</serial>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="4bade6e8-fdf9-43e9-9d6d-d9ef27dd7861"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:52:09 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:09 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.477 239853 INFO nova.virt.libvirt.driver [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Successfully detached device vdb from instance 58f005d9-a28a-4d78-894c-45ac84602542 from the persistent domain config.
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.477 239853 DEBUG nova.virt.libvirt.driver [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 58f005d9-a28a-4d78-894c-45ac84602542 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.478 239853 DEBUG nova.virt.libvirt.guest [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-255429c2-5a82-4a67-9bda-beb812b364b7">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <serial>255429c2-5a82-4a67-9bda-beb812b364b7</serial>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:52:09 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="4bade6e8-fdf9-43e9-9d6d-d9ef27dd7861"/>
Feb  2 12:52:09 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:52:09 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:09 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 12:52:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 2.7 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 443 KiB/s rd, 96 MiB/s wr, 336 op/s
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.571 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054729.571179, 58f005d9-a28a-4d78-894c-45ac84602542 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.573 239853 DEBUG nova.virt.libvirt.driver [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 58f005d9-a28a-4d78-894c-45ac84602542 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.575 239853 INFO nova.virt.libvirt.driver [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Successfully detached device vdb from instance 58f005d9-a28a-4d78-894c-45ac84602542 from the live domain config.
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.719 239853 DEBUG nova.objects.instance [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'flavor' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.767 239853 DEBUG oslo_concurrency.lockutils [None req-000d9184-24b5-4144-ba1c-7e2a09e4337e 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:52:09 np0005605476 nova_compute[239846]: 2026-02-02 17:52:09.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.039 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.039 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.054 239853 DEBUG nova.objects.instance [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.089 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:52:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.303 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.303 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.303 239853 INFO nova.compute.manager [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Attaching volume 6807ba34-60a4-4ce9-9628-6fe672b41b3b to /dev/vdb
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.336 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.438 239853 DEBUG os_brick.utils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.439 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.446 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.446 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe364f2-de60-4a0a-adda-7d82c7dc29e5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.447 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.452 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.452 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[d251ab51-75f7-42cf-a101-ba1b83bad679]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.453 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.458 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.458 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[b41dbe15-419e-4fca-a443-7bb8161a7702]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.459 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c73360-6a20-4ee9-aa2a-329ed065f9ea]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.459 239853 DEBUG oslo_concurrency.processutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.477 239853 DEBUG oslo_concurrency.processutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.479 239853 DEBUG os_brick.initiator.connectors.lightos [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.479 239853 DEBUG os_brick.initiator.connectors.lightos [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.480 239853 DEBUG os_brick.initiator.connectors.lightos [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.480 239853 DEBUG os_brick.utils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] <== get_connector_properties: return (41ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.480 239853 DEBUG nova.virt.block_device [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating existing volume attachment record: 8b048f59-7789-4179-8856-779ca7222561 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:52:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Feb  2 12:52:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Feb  2 12:52:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.715 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.716 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.716 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.717 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.717 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.718 239853 INFO nova.compute.manager [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Terminating instance#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.720 239853 DEBUG nova.compute.manager [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:52:10 np0005605476 kernel: tap885b4958-c6 (unregistering): left promiscuous mode
Feb  2 12:52:10 np0005605476 NetworkManager[49022]: <info>  [1770054730.7627] device (tap885b4958-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.769 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:10Z|00138|binding|INFO|Releasing lport 885b4958-c65e-403a-a99d-2c07671482a7 from this chassis (sb_readonly=0)
Feb  2 12:52:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:10Z|00139|binding|INFO|Setting lport 885b4958-c65e-403a-a99d-2c07671482a7 down in Southbound
Feb  2 12:52:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:10Z|00140|binding|INFO|Removing iface tap885b4958-c6 ovn-installed in OVS
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.772 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.778 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:ad:69 10.100.0.10'], port_security=['fa:16:3e:be:ad:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '58f005d9-a28a-4d78-894c-45ac84602542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '579907b0a88b4f8b8769e75035c71cb0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a63db91b-feb8-4308-baa2-1840080b75f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5eea345e-6e1c-46cd-8a99-ce93e73e6294, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=885b4958-c65e-403a-a99d-2c07671482a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.779 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 885b4958-c65e-403a-a99d-2c07671482a7 in datapath 4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 unbound from our chassis#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.780 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.781 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.781 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[080e69ea-6ee4-4f3f-957f-c07aef0ed62f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.782 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 namespace which is not needed anymore#033[00m
Feb  2 12:52:10 np0005605476 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb  2 12:52:10 np0005605476 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 14.592s CPU time.
Feb  2 12:52:10 np0005605476 systemd-machined[208080]: Machine qemu-14-instance-0000000e terminated.
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [NOTICE]   (258347) : haproxy version is 2.8.14-c23fe91
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [NOTICE]   (258347) : path to executable is /usr/sbin/haproxy
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [WARNING]  (258347) : Exiting Master process...
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [WARNING]  (258347) : Exiting Master process...
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [ALERT]    (258347) : Current worker (258349) exited with code 143 (Terminated)
Feb  2 12:52:10 np0005605476 neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802[258343]: [WARNING]  (258347) : All workers exited. Exiting... (0)
Feb  2 12:52:10 np0005605476 systemd[1]: libpod-4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47.scope: Deactivated successfully.
Feb  2 12:52:10 np0005605476 podman[258580]: 2026-02-02 17:52:10.882180147 +0000 UTC m=+0.040514388 container died 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:52:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47-userdata-shm.mount: Deactivated successfully.
Feb  2 12:52:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d2cc05fcd9d47053287ba574162040e716aae4bb738fc9b35becf65279d3ad9b-merged.mount: Deactivated successfully.
Feb  2 12:52:10 np0005605476 podman[258580]: 2026-02-02 17:52:10.9129308 +0000 UTC m=+0.071265041 container cleanup 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 12:52:10 np0005605476 systemd[1]: libpod-conmon-4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47.scope: Deactivated successfully.
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.948 239853 INFO nova.virt.libvirt.driver [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Instance destroyed successfully.#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.948 239853 DEBUG nova.objects.instance [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lazy-loading 'resources' on Instance uuid 58f005d9-a28a-4d78-894c-45ac84602542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.962 239853 DEBUG nova.virt.libvirt.vif [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:51:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1441261',display_name='tempest-TestEncryptedCinderVolumes-server-1441261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1441261',id=14,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCss6AgPa7bURFSmlFx5e9TW4W97x0hTlX+lOzoWItgtIAQg82IfDa9TH2uKSgbrmUCoaez3KNng6Sw+wl5jyO6WgfkSYYkHE+yy0WX7Vs/I+XwoKTiVHjgf2XDq1ZSx4g==',key_name='tempest-keypair-1629260160',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:51:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='579907b0a88b4f8b8769e75035c71cb0',ramdisk_id='',reservation_id='r-8ud6l41s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1546153962',owner_user_name='tempest-TestEncryptedCinderVolumes-1546153962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:51:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='275a756bbf8748d6adfeb979b49b1846',uuid=58f005d9-a28a-4d78-894c-45ac84602542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.962 239853 DEBUG nova.network.os_vif_util [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converting VIF {"id": "885b4958-c65e-403a-a99d-2c07671482a7", "address": "fa:16:3e:be:ad:69", "network": {"id": "4d65fc0d-a384-439a-aa0f-b1fdb2ce9802", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1613607492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "579907b0a88b4f8b8769e75035c71cb0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap885b4958-c6", "ovs_interfaceid": "885b4958-c65e-403a-a99d-2c07671482a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.963 239853 DEBUG nova.network.os_vif_util [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.963 239853 DEBUG os_vif [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.964 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.964 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885b4958-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:10 np0005605476 podman[258607]: 2026-02-02 17:52:10.966186674 +0000 UTC m=+0.039568651 container remove 4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.966 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.969 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.970 239853 INFO os_vif [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:ad:69,bridge_name='br-int',has_traffic_filtering=True,id=885b4958-c65e-403a-a99d-2c07671482a7,network=Network(4d65fc0d-a384-439a-aa0f-b1fdb2ce9802),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap885b4958-c6')#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.970 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8db41165-98b3-4bc4-a7fc-34d1b65f9732]: (4, ('Mon Feb  2 05:52:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 (4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47)\n4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47\nMon Feb  2 05:52:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 (4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47)\n4f7ffb042f7e89e634f961692d5e52154fe93010936388695c7548ff92665a47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.972 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[125e1712-eaa0-49b6-9d61-9f2543fe1fac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.973 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d65fc0d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:10 np0005605476 kernel: tap4d65fc0d-a0: left promiscuous mode
Feb  2 12:52:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:10.983 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[aefc7ad9-d4fb-44fb-b348-2ceb66bbb810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:10 np0005605476 nova_compute[239846]: 2026-02-02 17:52:10.984 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:11.000 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7520335a-ea35-4d2d-af5e-749125dd738a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:11.002 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd6530a-7eeb-4324-b6d6-ba3f84119f68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:11.014 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[02cb2796-10c1-4d0d-88a7-7717afa20405]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394491, 'reachable_time': 15530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258649, 'error': None, 'target': 'ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:11 np0005605476 systemd[1]: run-netns-ovnmeta\x2d4d65fc0d\x2da384\x2d439a\x2daa0f\x2db1fdb2ce9802.mount: Deactivated successfully.
Feb  2 12:52:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:11.018 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d65fc0d-a384-439a-aa0f-b1fdb2ce9802 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:52:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:11.018 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[056b784c-8955-4c06-9dc6-baad9ab69ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:52:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2325132099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.252 239853 INFO nova.virt.libvirt.driver [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Deleting instance files /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542_del#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.252 239853 INFO nova.virt.libvirt.driver [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Deletion of /var/lib/nova/instances/58f005d9-a28a-4d78-894c-45ac84602542_del complete#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.307 239853 INFO nova.compute.manager [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.308 239853 DEBUG oslo.service.loopingcall [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.310 239853 DEBUG nova.compute.manager [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.311 239853 DEBUG nova.network.neutron [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.323 239853 DEBUG nova.objects.instance [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.346 239853 DEBUG nova.virt.libvirt.driver [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Attempting to attach volume 6807ba34-60a4-4ce9-9628-6fe672b41b3b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.349 239853 DEBUG nova.virt.libvirt.guest [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6807ba34-60a4-4ce9-9628-6fe672b41b3b">
Feb  2 12:52:11 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:52:11 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:11 np0005605476 nova_compute[239846]:  <serial>6807ba34-60a4-4ce9-9628-6fe672b41b3b</serial>
Feb  2 12:52:11 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:11 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.369 239853 DEBUG nova.compute.manager [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-unplugged-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.370 239853 DEBUG oslo_concurrency.lockutils [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.370 239853 DEBUG oslo_concurrency.lockutils [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.371 239853 DEBUG oslo_concurrency.lockutils [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.371 239853 DEBUG nova.compute.manager [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] No waiting events found dispatching network-vif-unplugged-885b4958-c65e-403a-a99d-2c07671482a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.371 239853 DEBUG nova.compute.manager [req-d3c0e395-8afe-4014-8914-6ffdbc6195a4 req-4a156285-7be0-4b84-8a47-2d9507dd7174 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-unplugged-885b4958-c65e-403a-a99d-2c07671482a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.468 239853 DEBUG nova.virt.libvirt.driver [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.468 239853 DEBUG nova.virt.libvirt.driver [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.468 239853 DEBUG nova.virt.libvirt.driver [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.468 239853 DEBUG nova.virt.libvirt.driver [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] No VIF found with MAC fa:16:3e:97:50:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:52:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 2.4 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 545 KiB/s rd, 128 MiB/s wr, 425 op/s
Feb  2 12:52:11 np0005605476 nova_compute[239846]: 2026-02-02 17:52:11.660 239853 DEBUG oslo_concurrency.lockutils [None req-4ce0b629-a682-4608-b45c-f52eab1c54be 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.079 239853 DEBUG nova.network.neutron [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.098 239853 INFO nova.compute.manager [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Took 0.79 seconds to deallocate network for instance.#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.146 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.146 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.153 239853 DEBUG nova.compute.manager [req-1dde8898-fc23-4b6e-ac66-f49ae606fee3 req-5f28decc-23d8-48d1-9bc1-9ae9da43c811 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-deleted-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.231 239853 DEBUG oslo_concurrency.processutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018697678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018697678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913960874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.797 239853 DEBUG oslo_concurrency.processutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.803 239853 DEBUG nova.compute.provider_tree [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.818 239853 DEBUG nova.scheduler.client.report [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.837 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.871 239853 INFO nova.scheduler.client.report [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Deleted allocations for instance 58f005d9-a28a-4d78-894c-45ac84602542#033[00m
Feb  2 12:52:12 np0005605476 nova_compute[239846]: 2026-02-02 17:52:12.930 239853 DEBUG oslo_concurrency.lockutils [None req-8838e29e-bb9f-404d-87ef-f454b4a27037 275a756bbf8748d6adfeb979b49b1846 579907b0a88b4f8b8769e75035c71cb0 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 2.4 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 441 KiB/s rd, 91 MiB/s wr, 267 op/s
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.498 239853 DEBUG nova.compute.manager [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.498 239853 DEBUG oslo_concurrency.lockutils [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "58f005d9-a28a-4d78-894c-45ac84602542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.498 239853 DEBUG oslo_concurrency.lockutils [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.499 239853 DEBUG oslo_concurrency.lockutils [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "58f005d9-a28a-4d78-894c-45ac84602542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.499 239853 DEBUG nova.compute.manager [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] No waiting events found dispatching network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.499 239853 WARNING nova.compute.manager [req-5b9b58d5-de45-4182-aacf-5fab10869952 req-6a1049e0-1bed-4592-b8c9-44d0d4c3eede e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Received unexpected event network-vif-plugged-885b4958-c65e-403a-a99d-2c07671482a7 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:52:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Feb  2 12:52:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Feb  2 12:52:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.892 239853 DEBUG oslo_concurrency.lockutils [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.894 239853 DEBUG oslo_concurrency.lockutils [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:13 np0005605476 nova_compute[239846]: 2026-02-02 17:52:13.908 239853 INFO nova.compute.manager [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Detaching volume 6807ba34-60a4-4ce9-9628-6fe672b41b3b#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.028 239853 INFO nova.virt.block_device [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Attempting to driver detach volume 6807ba34-60a4-4ce9-9628-6fe672b41b3b from mountpoint /dev/vdb#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.038 239853 DEBUG nova.virt.libvirt.driver [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Attempting to detach device vdb from instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.038 239853 DEBUG nova.virt.libvirt.guest [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6807ba34-60a4-4ce9-9628-6fe672b41b3b">
Feb  2 12:52:14 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <serial>6807ba34-60a4-4ce9-9628-6fe672b41b3b</serial>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:14 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.047 239853 INFO nova.virt.libvirt.driver [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully detached device vdb from instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c from the persistent domain config.#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.047 239853 DEBUG nova.virt.libvirt.driver [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.047 239853 DEBUG nova.virt.libvirt.guest [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6807ba34-60a4-4ce9-9628-6fe672b41b3b">
Feb  2 12:52:14 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <serial>6807ba34-60a4-4ce9-9628-6fe672b41b3b</serial>
Feb  2 12:52:14 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:52:14 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:52:14 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.154 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770054734.1543853, e834c41a-ab1b-421b-8fbc-afcb2d642a3c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.156 239853 DEBUG nova.virt.libvirt.driver [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.159 239853 INFO nova.virt.libvirt.driver [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully detached device vdb from instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c from the live domain config.#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.354 239853 DEBUG nova.objects.instance [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'flavor' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.394 239853 DEBUG oslo_concurrency.lockutils [None req-de16dba4-df48-4846-a762-db924a3ec3d7 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Feb  2 12:52:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Feb  2 12:52:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Feb  2 12:52:14 np0005605476 nova_compute[239846]: 2026-02-02 17:52:14.939 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1554607625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1554607625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4046094806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4046094806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 2 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 299 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1022 KiB/s rd, 22 MiB/s wr, 367 op/s
Feb  2 12:52:15 np0005605476 podman[258697]: 2026-02-02 17:52:15.599844063 +0000 UTC m=+0.047444393 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.824 239853 DEBUG nova.compute.manager [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.824 239853 DEBUG nova.compute.manager [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing instance network info cache due to event network-changed-7f07651d-c620-4f85-b534-2f5cc3d866d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.828 239853 DEBUG oslo_concurrency.lockutils [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.828 239853 DEBUG oslo_concurrency.lockutils [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.828 239853 DEBUG nova.network.neutron [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Refreshing network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.884 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.884 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.885 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.885 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.885 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.886 239853 INFO nova.compute.manager [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Terminating instance#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.887 239853 DEBUG nova.compute.manager [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:52:15 np0005605476 kernel: tap7f07651d-c6 (unregistering): left promiscuous mode
Feb  2 12:52:15 np0005605476 NetworkManager[49022]: <info>  [1770054735.9304] device (tap7f07651d-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.935 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:15Z|00141|binding|INFO|Releasing lport 7f07651d-c620-4f85-b534-2f5cc3d866d5 from this chassis (sb_readonly=0)
Feb  2 12:52:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:15Z|00142|binding|INFO|Setting lport 7f07651d-c620-4f85-b534-2f5cc3d866d5 down in Southbound
Feb  2 12:52:15 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:15Z|00143|binding|INFO|Removing iface tap7f07651d-c6 ovn-installed in OVS
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.938 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.944 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:50:54 10.100.0.6'], port_security=['fa:16:3e:97:50:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e834c41a-ab1b-421b-8fbc-afcb2d642a3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9c44462f87f421099e0b0d1376904c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c63f7b3b-d1b7-480e-bc0f-69ad7c8d6195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e81d0e0c-73b2-43ee-93af-f299a40e5ded, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=7f07651d-c620-4f85-b534-2f5cc3d866d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.946 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 7f07651d-c620-4f85-b534-2f5cc3d866d5 in datapath 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 unbound from our chassis#033[00m
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.946 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.949 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6#033[00m
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.963 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0fee09f3-a734-4dd4-9ca0-661011525bee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:15 np0005605476 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb  2 12:52:15 np0005605476 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.004s CPU time.
Feb  2 12:52:15 np0005605476 nova_compute[239846]: 2026-02-02 17:52:15.966 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:15 np0005605476 systemd-machined[208080]: Machine qemu-13-instance-0000000d terminated.
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.984 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[9dabddf9-01dc-403c-9d07-4cb886b76841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:15 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:15.988 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[9e575363-c0eb-4e64-aead-6d74afd20ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.002 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f08fa6fc-23ed-487e-b873-50a4668b3183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.014 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f5e05c-97c2-4483-b788-c1de519de7a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27d3f0a2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:1e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389118, 'reachable_time': 20773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258728, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.026 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[20b57ce0-76b8-4e64-8bbf-655da57ad665]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap27d3f0a2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389126, 'tstamp': 389126}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258729, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap27d3f0a2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389129, 'tstamp': 389129}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258729, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.028 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d3f0a2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.029 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.034 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.034 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d3f0a2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.035 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.035 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27d3f0a2-70, col_values=(('external_ids', {'iface-id': 'feaa395a-f5d1-49f8-90b4-f45ef83f72dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:16.036 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.119 239853 INFO nova.virt.libvirt.driver [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Instance destroyed successfully.#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.119 239853 DEBUG nova.objects.instance [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'resources' on Instance uuid e834c41a-ab1b-421b-8fbc-afcb2d642a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.152 239853 DEBUG nova.virt.libvirt.vif [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:51:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-27035982',display_name='tempest-TestStampPattern-server-27035982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-27035982',id=13,image_ref='9440fdc0-af14-4205-993a-98d6bf0736d2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:51:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-s3gt3fa5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c29c7ea2-29c6-40eb-a75b-289e533ecc64',image_min_disk='1',image_min_ram='0',image_owner_id='e9c44462f87f421099e0b0d1376904c4',image_owner_project_name='tempest-TestStampPattern-468537565',image_owner_user_name='tempest-TestStampPattern-468537565-project-member',image_user_id='35a3cbbc2e32427f9356703501969892',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:51:35Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=e834c41a-ab1b-421b-8fbc-afcb2d642a3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='act
ive') vif={"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.152 239853 DEBUG nova.network.os_vif_util [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.153 239853 DEBUG nova.network.os_vif_util [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.154 239853 DEBUG os_vif [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.156 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.157 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f07651d-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.159 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.161 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.165 239853 INFO os_vif [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:50:54,bridge_name='br-int',has_traffic_filtering=True,id=7f07651d-c620-4f85-b534-2f5cc3d866d5,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f07651d-c6')#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.428 239853 INFO nova.virt.libvirt.driver [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Deleting instance files /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c_del#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.429 239853 INFO nova.virt.libvirt.driver [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Deletion of /var/lib/nova/instances/e834c41a-ab1b-421b-8fbc-afcb2d642a3c_del complete#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.484 239853 INFO nova.compute.manager [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.484 239853 DEBUG oslo.service.loopingcall [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.485 239853 DEBUG nova.compute.manager [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:52:16 np0005605476 nova_compute[239846]: 2026-02-02 17:52:16.485 239853 DEBUG nova.network.neutron [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:52:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Feb  2 12:52:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Feb  2 12:52:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.163 239853 DEBUG nova.network.neutron [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updated VIF entry in instance network info cache for port 7f07651d-c620-4f85-b534-2f5cc3d866d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.164 239853 DEBUG nova.network.neutron [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating instance_info_cache with network_info: [{"id": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "address": "fa:16:3e:97:50:54", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f07651d-c6", "ovs_interfaceid": "7f07651d-c620-4f85-b534-2f5cc3d866d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.192 239853 DEBUG oslo_concurrency.lockutils [req-444dd500-364e-4e11-b037-af867e2536cb req-bbbe4b02-e91c-4959-b550-58f0e5a13c9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-e834c41a-ab1b-421b-8fbc-afcb2d642a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:52:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 2 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 299 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 940 KiB/s rd, 349 KiB/s wr, 292 op/s
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.516 239853 DEBUG nova.network.neutron [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.547 239853 INFO nova.compute.manager [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Took 1.06 seconds to deallocate network for instance.#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.581 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.582 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.646 239853 DEBUG oslo_concurrency.processutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Feb  2 12:52:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.945 239853 DEBUG nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-vif-unplugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.946 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.946 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.947 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.948 239853 DEBUG nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] No waiting events found dispatching network-vif-unplugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.948 239853 WARNING nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received unexpected event network-vif-unplugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.949 239853 DEBUG nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.950 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.950 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.951 239853 DEBUG oslo_concurrency.lockutils [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.951 239853 DEBUG nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] No waiting events found dispatching network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.952 239853 WARNING nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received unexpected event network-vif-plugged-7f07651d-c620-4f85-b534-2f5cc3d866d5 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:52:17 np0005605476 nova_compute[239846]: 2026-02-02 17:52:17.952 239853 DEBUG nova.compute.manager [req-2a466f9f-bc02-425c-ad2b-730e782a0153 req-cdf8dd5d-427b-4ce2-83c3-211ab683c1b1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Received event network-vif-deleted-7f07651d-c620-4f85-b534-2f5cc3d866d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448564855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448564855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611037560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.178 239853 DEBUG oslo_concurrency.processutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.184 239853 DEBUG nova.compute.provider_tree [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.206 239853 DEBUG nova.scheduler.client.report [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.224 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.254 239853 INFO nova.scheduler.client.report [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Deleted allocations for instance e834c41a-ab1b-421b-8fbc-afcb2d642a3c#033[00m
Feb  2 12:52:18 np0005605476 nova_compute[239846]: 2026-02-02 17:52:18.328 239853 DEBUG oslo_concurrency.lockutils [None req-626534be-30ae-436b-acae-24f7e9ec9482 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "e834c41a-ab1b-421b-8fbc-afcb2d642a3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:19 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:19Z|00144|binding|INFO|Releasing lport feaa395a-f5d1-49f8-90b4-f45ef83f72dd from this chassis (sb_readonly=0)
Feb  2 12:52:19 np0005605476 nova_compute[239846]: 2026-02-02 17:52:19.132 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452516816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452516816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 1.6 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 1.0 MiB/s rd, 366 KiB/s wr, 467 op/s
Feb  2 12:52:19 np0005605476 podman[258783]: 2026-02-02 17:52:19.634219913 +0000 UTC m=+0.081012889 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Feb  2 12:52:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Feb  2 12:52:19 np0005605476 nova_compute[239846]: 2026-02-02 17:52:19.941 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142172768' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142172768' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Feb  2 12:52:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Feb  2 12:52:21 np0005605476 nova_compute[239846]: 2026-02-02 17:52:21.159 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Feb  2 12:52:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Feb  2 12:52:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Feb  2 12:52:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 892 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 20 KiB/s wr, 405 op/s
Feb  2 12:52:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Feb  2 12:52:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Feb  2 12:52:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Feb  2 12:52:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:22.888 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:52:22 np0005605476 nova_compute[239846]: 2026-02-02 17:52:22.888 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:22.890 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:52:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:22.892 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:23 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:23Z|00145|binding|INFO|Releasing lport feaa395a-f5d1-49f8-90b4-f45ef83f72dd from this chassis (sb_readonly=0)
Feb  2 12:52:23 np0005605476 nova_compute[239846]: 2026-02-02 17:52:23.168 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:23 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:23Z|00146|binding|INFO|Releasing lport feaa395a-f5d1-49f8-90b4-f45ef83f72dd from this chassis (sb_readonly=0)
Feb  2 12:52:23 np0005605476 nova_compute[239846]: 2026-02-02 17:52:23.468 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 892 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.7 KiB/s wr, 157 op/s
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.312 239853 DEBUG nova.compute.manager [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.312 239853 DEBUG nova.compute.manager [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing instance network info cache due to event network-changed-82cd628a-7fae-47cb-ba3b-d2c670304572. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.312 239853 DEBUG oslo_concurrency.lockutils [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.312 239853 DEBUG oslo_concurrency.lockutils [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.312 239853 DEBUG nova.network.neutron [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Refreshing network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.391 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.392 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.392 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.393 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.393 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.394 239853 INFO nova.compute.manager [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Terminating instance#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.395 239853 DEBUG nova.compute.manager [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:52:24 np0005605476 kernel: tap82cd628a-7f (unregistering): left promiscuous mode
Feb  2 12:52:24 np0005605476 NetworkManager[49022]: <info>  [1770054744.4399] device (tap82cd628a-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.447 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:24Z|00147|binding|INFO|Releasing lport 82cd628a-7fae-47cb-ba3b-d2c670304572 from this chassis (sb_readonly=0)
Feb  2 12:52:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:24Z|00148|binding|INFO|Setting lport 82cd628a-7fae-47cb-ba3b-d2c670304572 down in Southbound
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.449 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 ovn_controller[146041]: 2026-02-02T17:52:24Z|00149|binding|INFO|Removing iface tap82cd628a-7f ovn-installed in OVS
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.454 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:0a:fe 10.100.0.3'], port_security=['fa:16:3e:be:0a:fe 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c29c7ea2-29c6-40eb-a75b-289e533ecc64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9c44462f87f421099e0b0d1376904c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c63f7b3b-d1b7-480e-bc0f-69ad7c8d6195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e81d0e0c-73b2-43ee-93af-f299a40e5ded, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=82cd628a-7fae-47cb-ba3b-d2c670304572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.455 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 82cd628a-7fae-47cb-ba3b-d2c670304572 in datapath 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 unbound from our chassis#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.456 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.457 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b9f986-27e1-47ed-9761-2e9d56dd3f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.457 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 namespace which is not needed anymore#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.461 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb  2 12:52:24 np0005605476 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 16.595s CPU time.
Feb  2 12:52:24 np0005605476 systemd-machined[208080]: Machine qemu-11-instance-0000000b terminated.
Feb  2 12:52:24 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [NOTICE]   (256019) : haproxy version is 2.8.14-c23fe91
Feb  2 12:52:24 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [NOTICE]   (256019) : path to executable is /usr/sbin/haproxy
Feb  2 12:52:24 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [WARNING]  (256019) : Exiting Master process...
Feb  2 12:52:24 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [ALERT]    (256019) : Current worker (256021) exited with code 143 (Terminated)
Feb  2 12:52:24 np0005605476 neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6[256015]: [WARNING]  (256019) : All workers exited. Exiting... (0)
Feb  2 12:52:24 np0005605476 systemd[1]: libpod-76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae.scope: Deactivated successfully.
Feb  2 12:52:24 np0005605476 podman[258833]: 2026-02-02 17:52:24.569992371 +0000 UTC m=+0.040118508 container died 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:52:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae-userdata-shm.mount: Deactivated successfully.
Feb  2 12:52:24 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2fe45d0d5151e721dabccbb40d23cc4499ae504a0706b19625213f34cacefbb1-merged.mount: Deactivated successfully.
Feb  2 12:52:24 np0005605476 podman[258833]: 2026-02-02 17:52:24.607569438 +0000 UTC m=+0.077695555 container cleanup 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:52:24 np0005605476 systemd[1]: libpod-conmon-76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae.scope: Deactivated successfully.
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.621 239853 INFO nova.virt.libvirt.driver [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Instance destroyed successfully.#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.622 239853 DEBUG nova.objects.instance [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lazy-loading 'resources' on Instance uuid c29c7ea2-29c6-40eb-a75b-289e533ecc64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.634 239853 DEBUG nova.virt.libvirt.vif [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:50:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1184645063',display_name='tempest-TestStampPattern-server-1184645063',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1184645063',id=11,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGqFMmNb4ZAPk8RVu/FFMi3k6WI+izJKLyBxB69JpH7ilEv0u63uYq2zTj0Glbc+nwMtG/S4/tso6JPVtEY8X3OQR4PTeN4nDIhjWTck6bwXT8nLeJwKUp+diq1s2d6kw==',key_name='tempest-TestStampPattern-811527337',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:50:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e9c44462f87f421099e0b0d1376904c4',ramdisk_id='',reservation_id='r-m0cz5nux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-468537565',owner_user_name='tempest-TestStampPattern-468537565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:51:23Z,user_data=None,user_id='35a3cbbc2e32427f9356703501969892',uuid=c29c7ea2-29c6-40eb-a75b-289e533ecc64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.635 239853 DEBUG nova.network.os_vif_util [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converting VIF {"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.635 239853 DEBUG nova.network.os_vif_util [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.635 239853 DEBUG os_vif [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.636 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.637 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82cd628a-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.639 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.641 239853 INFO os_vif [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:0a:fe,bridge_name='br-int',has_traffic_filtering=True,id=82cd628a-7fae-47cb-ba3b-d2c670304572,network=Network(27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82cd628a-7f')#033[00m
Feb  2 12:52:24 np0005605476 podman[258872]: 2026-02-02 17:52:24.662038329 +0000 UTC m=+0.036297671 container remove 76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.665 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c2badf3b-0d67-43d9-b29e-7dbac7e89b5b]: (4, ('Mon Feb  2 05:52:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 (76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae)\n76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae\nMon Feb  2 05:52:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 (76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae)\n76214290bbdb121dfbfe3ec05e40e550d6f8a8a695d8d7d6d739ce25758022ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.667 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[28af9da6-2aed-45c7-a41e-c7edd942e167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.668 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d3f0a2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:52:24 np0005605476 kernel: tap27d3f0a2-70: left promiscuous mode
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.670 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.675 239853 DEBUG nova.compute.manager [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-unplugged-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.675 239853 DEBUG oslo_concurrency.lockutils [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.676 239853 DEBUG oslo_concurrency.lockutils [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.676 239853 DEBUG oslo_concurrency.lockutils [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.676 239853 DEBUG nova.compute.manager [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] No waiting events found dispatching network-vif-unplugged-82cd628a-7fae-47cb-ba3b-d2c670304572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.677 239853 DEBUG nova.compute.manager [req-bce8bf52-770f-4131-8adf-63941aec323a req-aeb5a73a-68f1-48f9-97c2-479933e0321c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-unplugged-82cd628a-7fae-47cb-ba3b-d2c670304572 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.679 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.683 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[63dcb5e6-c8c5-4bb3-816c-f0accd10aa91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.699 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[14e14f85-786d-4c42-8437-a4a7a5b46656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.701 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[afd7ee22-1093-467c-ba54-b77fd8c2169d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.715 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6829807b-c5c5-4f2f-b13c-3132d64ae577]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389113, 'reachable_time': 18595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258906, 'error': None, 'target': 'ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 systemd[1]: run-netns-ovnmeta\x2d27d3f0a2\x2d763e\x2d43a0\x2daeb8\x2db55aa1afb0d6.mount: Deactivated successfully.
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.718 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:52:24 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:24.719 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf0d7ec-a607-4609-9f14-59ee10f63280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:24 np0005605476 nova_compute[239846]: 2026-02-02 17:52:24.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.017 239853 INFO nova.virt.libvirt.driver [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Deleting instance files /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64_del#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.018 239853 INFO nova.virt.libvirt.driver [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Deletion of /var/lib/nova/instances/c29c7ea2-29c6-40eb-a75b-289e533ecc64_del complete#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.074 239853 INFO nova.compute.manager [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.075 239853 DEBUG oslo.service.loopingcall [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.075 239853 DEBUG nova.compute.manager [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.076 239853 DEBUG nova.network.neutron [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:52:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Feb  2 12:52:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Feb  2 12:52:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Feb  2 12:52:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 128 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 9.0 KiB/s wr, 256 op/s
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.947 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054730.9461908, 58f005d9-a28a-4d78-894c-45ac84602542 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.948 239853 INFO nova.compute.manager [-] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:52:25 np0005605476 nova_compute[239846]: 2026-02-02 17:52:25.967 239853 DEBUG nova.compute.manager [None req-e85d0852-b1c2-42ef-8e22-fe69cb9af825 - - - - - -] [instance: 58f005d9-a28a-4d78-894c-45ac84602542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.445 239853 DEBUG nova.network.neutron [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updated VIF entry in instance network info cache for port 82cd628a-7fae-47cb-ba3b-d2c670304572. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.445 239853 DEBUG nova.network.neutron [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [{"id": "82cd628a-7fae-47cb-ba3b-d2c670304572", "address": "fa:16:3e:be:0a:fe", "network": {"id": "27d3f0a2-763e-43a0-aeb8-b55aa1afb0d6", "bridge": "br-int", "label": "tempest-TestStampPattern-1300451668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e9c44462f87f421099e0b0d1376904c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82cd628a-7f", "ovs_interfaceid": "82cd628a-7fae-47cb-ba3b-d2c670304572", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.466 239853 DEBUG oslo_concurrency.lockutils [req-22f83eeb-2ca2-4b8a-9728-ca7ed28a5f06 req-9bb15ddb-00e9-48d7-a7bc-f6facb34490b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-c29c7ea2-29c6-40eb-a75b-289e533ecc64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.754 239853 DEBUG nova.compute.manager [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.754 239853 DEBUG oslo_concurrency.lockutils [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.755 239853 DEBUG oslo_concurrency.lockutils [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.755 239853 DEBUG oslo_concurrency.lockutils [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.755 239853 DEBUG nova.compute.manager [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] No waiting events found dispatching network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:52:26 np0005605476 nova_compute[239846]: 2026-02-02 17:52:26.756 239853 WARNING nova.compute.manager [req-42d0a8a0-31b8-4ea9-9f09-c94dd027be70 req-8810951b-c725-41d9-afb5-b5fb0497eaaf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received unexpected event network-vif-plugged-82cd628a-7fae-47cb-ba3b-d2c670304572 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:52:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 128 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.0 KiB/s wr, 116 op/s
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.563 239853 DEBUG nova.network.neutron [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.580 239853 INFO nova.compute.manager [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Took 2.50 seconds to deallocate network for instance.#033[00m
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.743 239853 DEBUG nova.compute.manager [req-7df41591-19b1-402c-9485-2dd02d213093 req-1920aaf3-ea62-414f-861f-bfc7f42362fb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Received event network-vif-deleted-82cd628a-7fae-47cb-ba3b-d2c670304572 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.760 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.761 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:27 np0005605476 nova_compute[239846]: 2026-02-02 17:52:27.812 239853 DEBUG oslo_concurrency.processutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250197968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.405 239853 DEBUG oslo_concurrency.processutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.412 239853 DEBUG nova.compute.provider_tree [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.429 239853 DEBUG nova.scheduler.client.report [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.448 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.489 239853 INFO nova.scheduler.client.report [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Deleted allocations for instance c29c7ea2-29c6-40eb-a75b-289e533ecc64#033[00m
Feb  2 12:52:28 np0005605476 nova_compute[239846]: 2026-02-02 17:52:28.550 239853 DEBUG oslo_concurrency.lockutils [None req-6f09ae7b-a7cf-43db-b344-adb4637d8351 35a3cbbc2e32427f9356703501969892 e9c44462f87f421099e0b0d1376904c4 - - default default] Lock "c29c7ea2-29c6-40eb-a75b-289e533ecc64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/794340519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/794340519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 90 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.9 KiB/s wr, 131 op/s
Feb  2 12:52:29 np0005605476 nova_compute[239846]: 2026-02-02 17:52:29.638 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:29 np0005605476 nova_compute[239846]: 2026-02-02 17:52:29.944 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Feb  2 12:52:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Feb  2 12:52:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Feb  2 12:52:30 np0005605476 nova_compute[239846]: 2026-02-02 17:52:30.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:31 np0005605476 nova_compute[239846]: 2026-02-02 17:52:31.117 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054736.11564, e834c41a-ab1b-421b-8fbc-afcb2d642a3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:52:31 np0005605476 nova_compute[239846]: 2026-02-02 17:52:31.118 239853 INFO nova.compute.manager [-] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:52:31 np0005605476 nova_compute[239846]: 2026-02-02 17:52:31.143 239853 DEBUG nova.compute.manager [None req-a654f51b-8fa9-43ba-aee2-f879807b875c - - - - - -] [instance: e834c41a-ab1b-421b-8fbc-afcb2d642a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:52:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Feb  2 12:52:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Feb  2 12:52:31 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.379566677 +0000 UTC m=+0.037900107 container create b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:31 np0005605476 systemd[1]: Started libpod-conmon-b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b.scope.
Feb  2 12:52:31 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.451735176 +0000 UTC m=+0.110068646 container init b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.456676395 +0000 UTC m=+0.115009805 container start b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.361009585 +0000 UTC m=+0.019343055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.45970721 +0000 UTC m=+0.118040690 container attach b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:31 np0005605476 thirsty_feistel[259092]: 167 167
Feb  2 12:52:31 np0005605476 systemd[1]: libpod-b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b.scope: Deactivated successfully.
Feb  2 12:52:31 np0005605476 conmon[259092]: conmon b539b031a1b8deefe6b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b.scope/container/memory.events
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.46256844 +0000 UTC m=+0.120901910 container died b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:52:31 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3fe6fac3944f4b0bff9d4539255ae56d7fb74039205582ad448d8cd17d8f93ba-merged.mount: Deactivated successfully.
Feb  2 12:52:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 90 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 5.2 KiB/s wr, 108 op/s
Feb  2 12:52:31 np0005605476 podman[259075]: 2026-02-02 17:52:31.497696758 +0000 UTC m=+0.156030188 container remove b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:52:31 np0005605476 systemd[1]: libpod-conmon-b539b031a1b8deefe6b74c26b52f616ba07305cd88ddf8a69c04da72edc7bd6b.scope: Deactivated successfully.
Feb  2 12:52:31 np0005605476 podman[259116]: 2026-02-02 17:52:31.614141041 +0000 UTC m=+0.041560849 container create bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:52:31 np0005605476 systemd[1]: Started libpod-conmon-bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0.scope.
Feb  2 12:52:31 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b86d1d9507850fd8253311761e5c7134caa8b0ae9f8077c016719565d52b551/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b86d1d9507850fd8253311761e5c7134caa8b0ae9f8077c016719565d52b551/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b86d1d9507850fd8253311761e5c7134caa8b0ae9f8077c016719565d52b551/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:31 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b86d1d9507850fd8253311761e5c7134caa8b0ae9f8077c016719565d52b551/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:31 np0005605476 podman[259116]: 2026-02-02 17:52:31.681595778 +0000 UTC m=+0.109015596 container init bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:52:31 np0005605476 podman[259116]: 2026-02-02 17:52:31.687314389 +0000 UTC m=+0.114734187 container start bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 12:52:31 np0005605476 podman[259116]: 2026-02-02 17:52:31.59310731 +0000 UTC m=+0.020527148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:31 np0005605476 podman[259116]: 2026-02-02 17:52:31.689770558 +0000 UTC m=+0.117190386 container attach bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]: [
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:    {
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "available": false,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "being_replaced": false,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "ceph_device_lvm": false,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "lsm_data": {},
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "lvs": [],
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "path": "/dev/sr0",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "rejected_reasons": [
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "Insufficient space (<5GB)",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "Has a FileSystem"
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        ],
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        "sys_api": {
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "actuators": null,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "device_nodes": [
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:                "sr0"
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            ],
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "devname": "sr0",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "human_readable_size": "482.00 KB",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "id_bus": "ata",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "model": "QEMU DVD-ROM",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "nr_requests": "2",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "parent": "/dev/sr0",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "partitions": {},
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "path": "/dev/sr0",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "removable": "1",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "rev": "2.5+",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "ro": "0",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "rotational": "1",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "sas_address": "",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "sas_device_handle": "",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "scheduler_mode": "mq-deadline",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "sectors": 0,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "sectorsize": "2048",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "size": 493568.0,
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "support_discard": "2048",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "type": "disk",
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:            "vendor": "QEMU"
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:        }
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]:    }
Feb  2 12:52:32 np0005605476 objective_hofstadter[259132]: ]
Feb  2 12:52:32 np0005605476 systemd[1]: libpod-bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0.scope: Deactivated successfully.
Feb  2 12:52:32 np0005605476 podman[259116]: 2026-02-02 17:52:32.152105695 +0000 UTC m=+0.579525493 container died bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:52:32 np0005605476 systemd[1]: var-lib-containers-storage-overlay-8b86d1d9507850fd8253311761e5c7134caa8b0ae9f8077c016719565d52b551-merged.mount: Deactivated successfully.
Feb  2 12:52:32 np0005605476 podman[259116]: 2026-02-02 17:52:32.191595436 +0000 UTC m=+0.619015234 container remove bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:52:32 np0005605476 systemd[1]: libpod-conmon-bd31bd8bcc82f3651f7c726e2888f10a130900793877718d1ab9f9bba157aad0.scope: Deactivated successfully.
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:52:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.576156217 +0000 UTC m=+0.023448800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.676246611 +0000 UTC m=+0.123539214 container create 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:52:32 np0005605476 systemd[1]: Started libpod-conmon-74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619.scope.
Feb  2 12:52:32 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.768681129 +0000 UTC m=+0.215973712 container init 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.774110942 +0000 UTC m=+0.221403505 container start 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:52:32 np0005605476 crazy_nobel[259939]: 167 167
Feb  2 12:52:32 np0005605476 systemd[1]: libpod-74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619.scope: Deactivated successfully.
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.777453856 +0000 UTC m=+0.224746439 container attach 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.777834776 +0000 UTC m=+0.225127339 container died 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:32 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d3cd9a08928c918eb7fcf43b8ccb015aa6270196db5ca913c3e55861a9fc1847-merged.mount: Deactivated successfully.
Feb  2 12:52:32 np0005605476 podman[259923]: 2026-02-02 17:52:32.811931485 +0000 UTC m=+0.259224048 container remove 74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:52:32 np0005605476 systemd[1]: libpod-conmon-74895b7bf2e16ebddfc83e9cdc6c24db87aed06ce269fea2a91b586500c74619.scope: Deactivated successfully.
Feb  2 12:52:32 np0005605476 podman[259962]: 2026-02-02 17:52:32.928038638 +0000 UTC m=+0.036141907 container create 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:52:32 np0005605476 systemd[1]: Started libpod-conmon-0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d.scope.
Feb  2 12:52:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:33 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:32.912161232 +0000 UTC m=+0.020264521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:33.029637294 +0000 UTC m=+0.137740603 container init 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:33.043105893 +0000 UTC m=+0.151209162 container start 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:33.046267492 +0000 UTC m=+0.154370761 container attach 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:52:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:52:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:33 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:52:33 np0005605476 magical_cerf[259979]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:52:33 np0005605476 magical_cerf[259979]: --> All data devices are unavailable
Feb  2 12:52:33 np0005605476 systemd[1]: libpod-0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d.scope: Deactivated successfully.
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:33.421395218 +0000 UTC m=+0.529498487 container died 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:33 np0005605476 systemd[1]: var-lib-containers-storage-overlay-56977bc3e5b75ff3732a333386775829841e38d94075e89b1cc7bfbed121fe3a-merged.mount: Deactivated successfully.
Feb  2 12:52:33 np0005605476 podman[259962]: 2026-02-02 17:52:33.467844624 +0000 UTC m=+0.575947893 container remove 0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 12:52:33 np0005605476 systemd[1]: libpod-conmon-0d2b9c60e86d56c4db7f111aaac080d81abff0722d4ce52734afd3e3e8a5e69d.scope: Deactivated successfully.
Feb  2 12:52:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 90 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.5 KiB/s wr, 64 op/s
Feb  2 12:52:33 np0005605476 nova_compute[239846]: 2026-02-02 17:52:33.515 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:33 np0005605476 nova_compute[239846]: 2026-02-02 17:52:33.570 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.903696287 +0000 UTC m=+0.051174590 container create f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:52:33 np0005605476 systemd[1]: Started libpod-conmon-f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce.scope.
Feb  2 12:52:33 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.979258521 +0000 UTC m=+0.126736844 container init f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.885418583 +0000 UTC m=+0.032896876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.987291267 +0000 UTC m=+0.134769530 container start f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.99129277 +0000 UTC m=+0.138771123 container attach f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:33 np0005605476 quirky_pare[260092]: 167 167
Feb  2 12:52:33 np0005605476 systemd[1]: libpod-f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce.scope: Deactivated successfully.
Feb  2 12:52:33 np0005605476 conmon[260092]: conmon f1e1334b6e52b0b32320 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce.scope/container/memory.events
Feb  2 12:52:33 np0005605476 podman[260075]: 2026-02-02 17:52:33.99416154 +0000 UTC m=+0.141639803 container died f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:52:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0da95af972fe3777ffda112e9542a618efc472df25e45c842c0108fa1e96edbb-merged.mount: Deactivated successfully.
Feb  2 12:52:34 np0005605476 podman[260075]: 2026-02-02 17:52:34.030842642 +0000 UTC m=+0.178320915 container remove f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:34 np0005605476 systemd[1]: libpod-conmon-f1e1334b6e52b0b323202a7ef4900b32303026f256443151838712ea784901ce.scope: Deactivated successfully.
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.201947412 +0000 UTC m=+0.056649384 container create 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:52:34 np0005605476 systemd[1]: Started libpod-conmon-505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6.scope.
Feb  2 12:52:34 np0005605476 nova_compute[239846]: 2026-02-02 17:52:34.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:34 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1458b013cb82d15d58a0b6199630e15750a28e74ba1600442ed80a3969c85ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1458b013cb82d15d58a0b6199630e15750a28e74ba1600442ed80a3969c85ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1458b013cb82d15d58a0b6199630e15750a28e74ba1600442ed80a3969c85ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:34 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1458b013cb82d15d58a0b6199630e15750a28e74ba1600442ed80a3969c85ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.176146147 +0000 UTC m=+0.030848179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.302022435 +0000 UTC m=+0.156724377 container init 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.309742532 +0000 UTC m=+0.164444494 container start 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.313492528 +0000 UTC m=+0.168194500 container attach 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]: {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    "0": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "devices": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "/dev/loop3"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            ],
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_name": "ceph_lv0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_size": "21470642176",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "name": "ceph_lv0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "tags": {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_name": "ceph",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.crush_device_class": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.encrypted": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.objectstore": "bluestore",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_id": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.vdo": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.with_tpm": "0"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            },
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "vg_name": "ceph_vg0"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        }
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    ],
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    "1": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "devices": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "/dev/loop4"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            ],
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_name": "ceph_lv1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_size": "21470642176",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "name": "ceph_lv1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "tags": {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_name": "ceph",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.crush_device_class": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.encrypted": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.objectstore": "bluestore",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_id": "1",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.vdo": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.with_tpm": "0"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            },
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "vg_name": "ceph_vg1"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        }
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    ],
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    "2": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "devices": [
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "/dev/loop5"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            ],
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_name": "ceph_lv2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_size": "21470642176",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "name": "ceph_lv2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "tags": {
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.cluster_name": "ceph",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.crush_device_class": "",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.encrypted": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.objectstore": "bluestore",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osd_id": "2",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.vdo": "0",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:                "ceph.with_tpm": "0"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            },
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "type": "block",
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:            "vg_name": "ceph_vg2"
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:        }
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]:    ]
Feb  2 12:52:34 np0005605476 goofy_bhabha[260131]: }
Feb  2 12:52:34 np0005605476 systemd[1]: libpod-505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6.scope: Deactivated successfully.
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.613223664 +0000 UTC m=+0.467925646 container died 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 12:52:34 np0005605476 nova_compute[239846]: 2026-02-02 17:52:34.640 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:34 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b1458b013cb82d15d58a0b6199630e15750a28e74ba1600442ed80a3969c85ec-merged.mount: Deactivated successfully.
Feb  2 12:52:34 np0005605476 podman[260115]: 2026-02-02 17:52:34.663464777 +0000 UTC m=+0.518166719 container remove 505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bhabha, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:52:34 np0005605476 systemd[1]: libpod-conmon-505a8f5ac544a44b3adbb64f7b59f02d0e231831c5c2cb0120d9e68ca0b66bc6.scope: Deactivated successfully.
Feb  2 12:52:34 np0005605476 nova_compute[239846]: 2026-02-02 17:52:34.946 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.061637701 +0000 UTC m=+0.039089380 container create c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:52:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/823413105' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:35 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/823413105' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:35 np0005605476 systemd[1]: Started libpod-conmon-c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853.scope.
Feb  2 12:52:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.1288411 +0000 UTC m=+0.106292779 container init c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.132560845 +0000 UTC m=+0.110012524 container start c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 12:52:35 np0005605476 jovial_hoover[260231]: 167 167
Feb  2 12:52:35 np0005605476 systemd[1]: libpod-c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853.scope: Deactivated successfully.
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.136138965 +0000 UTC m=+0.113590684 container attach c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.136414193 +0000 UTC m=+0.113865882 container died c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.042438951 +0000 UTC m=+0.019890680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:35 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7698d07d371feb37803167ce95716994de1a291139865e658594b6c731af9e22-merged.mount: Deactivated successfully.
Feb  2 12:52:35 np0005605476 podman[260215]: 2026-02-02 17:52:35.168280439 +0000 UTC m=+0.145732128 container remove c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:52:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:35 np0005605476 systemd[1]: libpod-conmon-c4274e36df29c1465549a4c668bab1384bfc111eec580fa601eb1032815d5853.scope: Deactivated successfully.
Feb  2 12:52:35 np0005605476 podman[260255]: 2026-02-02 17:52:35.31309608 +0000 UTC m=+0.048565736 container create 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:35 np0005605476 systemd[1]: Started libpod-conmon-11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283.scope.
Feb  2 12:52:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:52:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f2371d35c28811ff7a31a150ffa929fa27fe8af7f824b72cf7f697a70492c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f2371d35c28811ff7a31a150ffa929fa27fe8af7f824b72cf7f697a70492c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f2371d35c28811ff7a31a150ffa929fa27fe8af7f824b72cf7f697a70492c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f2371d35c28811ff7a31a150ffa929fa27fe8af7f824b72cf7f697a70492c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:52:35 np0005605476 podman[260255]: 2026-02-02 17:52:35.295703801 +0000 UTC m=+0.031173427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:52:35 np0005605476 podman[260255]: 2026-02-02 17:52:35.405902759 +0000 UTC m=+0.141372455 container init 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:52:35 np0005605476 podman[260255]: 2026-02-02 17:52:35.419468941 +0000 UTC m=+0.154938587 container start 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 12:52:35 np0005605476 podman[260255]: 2026-02-02 17:52:35.423681579 +0000 UTC m=+0.159151285 container attach 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:52:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.4 KiB/s wr, 115 op/s
Feb  2 12:52:36 np0005605476 lvm[260350]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:52:36 np0005605476 lvm[260350]: VG ceph_vg1 finished
Feb  2 12:52:36 np0005605476 lvm[260349]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:52:36 np0005605476 lvm[260349]: VG ceph_vg0 finished
Feb  2 12:52:36 np0005605476 lvm[260352]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:52:36 np0005605476 lvm[260352]: VG ceph_vg2 finished
Feb  2 12:52:36 np0005605476 silly_shirley[260271]: {}
Feb  2 12:52:36 np0005605476 systemd[1]: libpod-11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283.scope: Deactivated successfully.
Feb  2 12:52:36 np0005605476 podman[260255]: 2026-02-02 17:52:36.160289288 +0000 UTC m=+0.895758924 container died 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:36 np0005605476 systemd[1]: libpod-11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283.scope: Consumed 1.034s CPU time.
Feb  2 12:52:36 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d3f2371d35c28811ff7a31a150ffa929fa27fe8af7f824b72cf7f697a70492c7-merged.mount: Deactivated successfully.
Feb  2 12:52:36 np0005605476 podman[260255]: 2026-02-02 17:52:36.200742225 +0000 UTC m=+0.936211851 container remove 11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:52:36 np0005605476 systemd[1]: libpod-conmon-11f27d8da8f177e31ecf77bcd821d45b18a383454a68e233233d25ec0f682283.scope: Deactivated successfully.
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Feb  2 12:52:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.295 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.296 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.296 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:36 np0005605476 nova_compute[239846]: 2026-02-02 17:52:36.296 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 12:52:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:52:36
Feb  2 12:52:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:52:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:52:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Feb  2 12:52:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:52:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:52:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Feb  2 12:52:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Feb  2 12:52:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Feb  2 12:52:37 np0005605476 nova_compute[239846]: 2026-02-02 17:52:37.310 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 4.9 KiB/s wr, 64 op/s
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:52:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.275 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.276 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.276 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.276 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.277 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124720197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.804 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.957 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.958 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4393MB free_disk=59.98798755276948GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.958 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:38 np0005605476 nova_compute[239846]: 2026-02-02 17:52:38.959 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.183 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.184 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.378 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 6.4 KiB/s wr, 94 op/s
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.619 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054744.6183634, c29c7ea2-29c6-40eb-a75b-289e533ecc64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.620 239853 INFO nova.compute.manager [-] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.638 239853 DEBUG nova.compute.manager [None req-41430754-ba5c-49f3-9c94-c5ab250f07eb - - - - - -] [instance: c29c7ea2-29c6-40eb-a75b-289e533ecc64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.645 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29329843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.858 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.864 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.880 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.900 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.900 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:39 np0005605476 nova_compute[239846]: 2026-02-02 17:52:39.947 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 7.9 KiB/s wr, 120 op/s
Feb  2 12:52:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Feb  2 12:52:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Feb  2 12:52:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Feb  2 12:52:41 np0005605476 nova_compute[239846]: 2026-02-02 17:52:41.901 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 12:52:43 np0005605476 nova_compute[239846]: 2026-02-02 17:52:43.262 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 12:52:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644610000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644610000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.4 KiB/s wr, 76 op/s
Feb  2 12:52:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/336399756' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/336399756' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:44 np0005605476 nova_compute[239846]: 2026-02-02 17:52:44.647 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:44 np0005605476 nova_compute[239846]: 2026-02-02 17:52:44.950 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Feb  2 12:52:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Feb  2 12:52:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Feb  2 12:52:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 12 KiB/s wr, 204 op/s
Feb  2 12:52:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/137421827' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/137421827' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:46 np0005605476 nova_compute[239846]: 2026-02-02 17:52:46.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:52:46 np0005605476 podman[260435]: 2026-02-02 17:52:46.627342856 +0000 UTC m=+0.069889775 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:52:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:46.642 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:46.643 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:52:46.643 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560301176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560301176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 9.2 KiB/s wr, 161 op/s
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.925081690610476e-06 of space, bias 1.0, pg target 0.0014775245071831427 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00036085509707423606 of space, bias 1.0, pg target 0.10825652912227082 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5299185308044198e-06 of space, bias 1.0, pg target 0.00045897555924132597 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665222159026306 of space, bias 1.0, pg target 0.1999566647707892 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.406828194536439e-07 of space, bias 4.0, pg target 0.0010088193833443727 quantized to 16 (current 16)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:52:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:52:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657857270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657857270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 10 KiB/s wr, 247 op/s
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Feb  2 12:52:49 np0005605476 nova_compute[239846]: 2026-02-02 17:52:49.649 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913371405' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913371405' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:49 np0005605476 nova_compute[239846]: 2026-02-02 17:52:49.985 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:50 np0005605476 podman[260456]: 2026-02-02 17:52:50.625568629 +0000 UTC m=+0.076616435 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127)
Feb  2 12:52:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 12 KiB/s wr, 276 op/s
Feb  2 12:52:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.4 KiB/s wr, 160 op/s
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Feb  2 12:52:54 np0005605476 nova_compute[239846]: 2026-02-02 17:52:54.650 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852010473' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852010473' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:54 np0005605476 nova_compute[239846]: 2026-02-02 17:52:54.986 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:52:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 8.2 KiB/s wr, 243 op/s
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.678842) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775678879, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1287, "num_deletes": 257, "total_data_size": 1684722, "memory_usage": 1711744, "flush_reason": "Manual Compaction"}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775690450, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1652157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25298, "largest_seqno": 26584, "table_properties": {"data_size": 1645889, "index_size": 3471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14072, "raw_average_key_size": 20, "raw_value_size": 1633033, "raw_average_value_size": 2408, "num_data_blocks": 154, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054700, "oldest_key_time": 1770054700, "file_creation_time": 1770054775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 11678 microseconds, and 4181 cpu microseconds.
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.690512) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1652157 bytes OK
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.690544) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.694799) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.694856) EVENT_LOG_v1 {"time_micros": 1770054775694845, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.694889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1678708, prev total WAL file size 1678708, number of live WAL files 2.
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.695677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1613KB)], [56(10MB)]
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775695737, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12422448, "oldest_snapshot_seqno": -1}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5560 keys, 10743542 bytes, temperature: kUnknown
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775776574, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10743542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10699601, "index_size": 28919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 138416, "raw_average_key_size": 24, "raw_value_size": 10592788, "raw_average_value_size": 1905, "num_data_blocks": 1186, "num_entries": 5560, "num_filter_entries": 5560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.776873) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10743542 bytes
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.780517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(14.0) write-amplify(6.5) OK, records in: 6088, records dropped: 528 output_compression: NoCompression
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.780577) EVENT_LOG_v1 {"time_micros": 1770054775780554, "job": 30, "event": "compaction_finished", "compaction_time_micros": 80923, "compaction_time_cpu_micros": 37889, "output_level": 6, "num_output_files": 1, "total_output_size": 10743542, "num_input_records": 6088, "num_output_records": 5560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775781187, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054775783291, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.695593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.783355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.783362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.783366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.783369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:52:55.783373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3188741605' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3188741605' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635675108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635675108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 88 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.8 KiB/s wr, 133 op/s
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2070729993' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2070729993' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.681 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.681 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.697 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694846719' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694846719' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.837 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.838 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.846 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.846 239853 INFO nova.compute.claims [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:52:58 np0005605476 nova_compute[239846]: 2026-02-02 17:52:58.941 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022594787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.472 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.476 239853 DEBUG nova.compute.provider_tree [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.490 239853 DEBUG nova.scheduler.client.report [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:52:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 39 KiB/s wr, 220 op/s
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.511 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.511 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.566 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.566 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.587 239853 INFO nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.606 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.652 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.673 239853 INFO nova.virt.block_device [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Booting with volume 4533e74b-612a-4eac-8ecd-f83e365e6e1a at /dev/vda#033[00m
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2966476333' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:52:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2966476333' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.787 239853 DEBUG nova.policy [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.835 239853 DEBUG os_brick.utils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.835 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.845 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.846 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[702cea06-57d7-4749-9da1-541f4ceddd77]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.847 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.854 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.854 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3b97ab-5d75-46ff-9d62-c8882f3062b0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.855 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.860 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.861 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[9aba9892-d1a2-4410-b659-8ae341efa760]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.862 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[f990930b-8ef9-4f68-8f00-059a24447fc0]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.862 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.875 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.876 239853 DEBUG os_brick.initiator.connectors.lightos [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.877 239853 DEBUG os_brick.initiator.connectors.lightos [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.877 239853 DEBUG os_brick.initiator.connectors.lightos [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.877 239853 DEBUG os_brick.utils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.877 239853 DEBUG nova.virt.block_device [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Updating existing volume attachment record: 3d9202c7-6656-45ea-8fb2-b0b392b5f93c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:52:59 np0005605476 nova_compute[239846]: 2026-02-02 17:52:59.987 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.320 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Successfully created port: dd8bd692-fb2b-4d9b-a57d-7292316b5669 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2445681367' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.970 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.972 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.972 239853 INFO nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Creating image(s)#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.973 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.973 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Ensure instance console log exists: /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.974 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.974 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:00 np0005605476 nova_compute[239846]: 2026-02-02 17:53:00.974 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182592001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182592001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.350 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Successfully updated port: dd8bd692-fb2b-4d9b-a57d-7292316b5669 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.365 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.365 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.366 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.459 239853 DEBUG nova.compute.manager [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-changed-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.460 239853 DEBUG nova.compute.manager [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Refreshing instance network info cache due to event network-changed-dd8bd692-fb2b-4d9b-a57d-7292316b5669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.460 239853 DEBUG oslo_concurrency.lockutils [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:53:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 46 KiB/s wr, 208 op/s
Feb  2 12:53:01 np0005605476 nova_compute[239846]: 2026-02-02 17:53:01.523 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:53:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893900940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893900940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.652 239853 DEBUG nova.network.neutron [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Updating instance_info_cache with network_info: [{"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.682 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.683 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Instance network_info: |[{"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.684 239853 DEBUG oslo_concurrency.lockutils [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.685 239853 DEBUG nova.network.neutron [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Refreshing network info cache for port dd8bd692-fb2b-4d9b-a57d-7292316b5669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.695 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Start _get_guest_xml network_info=[{"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '3d9202c7-6656-45ea-8fb2-b0b392b5f93c', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '83d19a54-6f62-4d48-a43d-4cb27ceebbe3', 'attached_at': '', 'detached_at': '', 'volume_id': '4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'serial': '4533e74b-612a-4eac-8ecd-f83e365e6e1a'}, 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.702 239853 WARNING nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.708 239853 DEBUG nova.virt.libvirt.host [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.709 239853 DEBUG nova.virt.libvirt.host [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.715 239853 DEBUG nova.virt.libvirt.host [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.716 239853 DEBUG nova.virt.libvirt.host [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.716 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.717 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.717 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.717 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.718 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.718 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.718 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.719 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.719 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.719 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.720 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.720 239853 DEBUG nova.virt.hardware [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.744 239853 DEBUG nova.storage.rbd_utils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:53:02 np0005605476 nova_compute[239846]: 2026-02-02 17:53:02.747 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078941719' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.265 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 38 KiB/s wr, 148 op/s
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.573 239853 DEBUG os_brick.encryptors [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Using volume encryption metadata '{'encryption_key_id': 'd971e481-0a0f-4160-9c8e-28ead2a11a63', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '83d19a54-6f62-4d48-a43d-4cb27ceebbe3', 'attached_at': '', 'detached_at': '', 'volume_id': '4533e74b-612a-4eac-8ecd-f83e365e6e1a', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.576 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.608 239853 DEBUG barbicanclient.v1.secrets [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.609 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.631 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.632 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.656 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.656 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.693 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.694 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.727 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.728 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.769 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.770 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.794 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.794 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.822 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.823 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.904 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.904 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.933 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.934 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.986 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:03 np0005605476 nova_compute[239846]: 2026-02-02 17:53:03.986 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.004 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.004 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.035 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.035 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.068 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.069 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.095 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.096 239853 INFO barbicanclient.base [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Calculated Secrets uuid ref: secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.120 239853 DEBUG barbicanclient.client [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
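The repeated "Response status 200" / "Calculated Secrets uuid ref" pairs above show barbicanclient resolving a secret href down to its `secrets/<uuid>` tail on each request while nova fetches the volume's LUKS passphrase. A minimal sketch of that ref reduction (hypothetical helper and example URL, not barbicanclient's real function):

```python
def calculated_uuid_ref(full_ref: str) -> str:
    """Reduce a full Barbican secret href to its 'secrets/<uuid>' tail,
    mirroring what barbicanclient.base logs (sketch, not the real code)."""
    parts = full_ref.rstrip("/").split("/")
    return "/".join(parts[-2:])

ref = calculated_uuid_ref(
    "http://barbican.example.com/v1/secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63"
)
# ref == "secrets/d971e481-0a0f-4160-9c8e-28ead2a11a63"
```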
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.120 239853 DEBUG nova.virt.libvirt.host [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <volume>4533e74b-612a-4eac-8ecd-f83e365e6e1a</volume>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:53:04 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:53:04 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
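The Secret XML logged by `create_secret` above is what nova hands to libvirtd before attaching the encrypted volume. A stdlib sketch that builds the same structure (illustrative helper, not nova's actual code; nova subsequently defines it via libvirt's `secretDefineXML` and stores the passphrase with `setValue`):

```python
import xml.etree.ElementTree as ET

def build_volume_secret_xml(volume_uuid: str) -> str:
    """Build a libvirt <secret> document for a Cinder volume, mirroring
    the shape nova logs from host.py (sketch only)."""
    secret = ET.Element("secret", ephemeral="no", private="no")
    usage = ET.SubElement(secret, "usage", type="volume")
    ET.SubElement(usage, "volume").text = volume_uuid
    return ET.tostring(secret, encoding="unicode")

xml = build_volume_secret_xml("4533e74b-612a-4eac-8ecd-f83e365e6e1a")
```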
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539602575' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539602575' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
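The two ceph-mon audit lines show Cinder's RBD driver (as `client.openstack`) polling cluster capacity and the `volumes` pool quota. The command body embedded in each `mon_command(...)` line is plain JSON; a sketch of pulling it back out (hypothetical helper, assumes the outermost braces delimit one JSON object):

```python
import json

def parse_mon_command(line: str) -> dict:
    """Extract the JSON command dict from a ceph-mon handle_command
    log line (sketch for log analysis, not a Ceph API)."""
    start, end = line.index("{"), line.rindex("}") + 1
    return json.loads(line[start:end])

cmd = parse_mon_command(
    'mon_command({"prefix":"osd pool get-quota", "pool": "volumes", '
    '"format":"json"} v 0)'
)
```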
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.151 239853 DEBUG nova.virt.libvirt.vif [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-573332264',display_name='tempest-TestVolumeBootPattern-server-573332264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-573332264',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-5fatxrzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:52:59Z,user_data=None,user_id='
d7b8ea09739a4455840062f2ad81089a',uuid=83d19a54-6f62-4d48-a43d-4cb27ceebbe3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.152 239853 DEBUG nova.network.os_vif_util [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.153 239853 DEBUG nova.network.os_vif_util [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
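The "Converting VIF" / "Converted object" pair above is `nova_to_osvif_vif` mapping nova's network-info dict onto an os-vif `VIFOpenVSwitch` object. A simplified sketch of that mapping (the dataclass is an illustrative stand-in for the real os_vif model; field names follow the logged objects):

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    # Illustrative stand-in for os_vif.objects.vif.VIFOpenVSwitch
    id: str
    address: str
    bridge_name: str
    vif_name: str
    active: bool

def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    """Sketch of the conversion nova logs from os_vif_util.py."""
    return VIFOpenVSwitch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=vif["details"]["bridge_name"],
        vif_name=vif["devname"],
        active=vif["active"],
    )

converted = nova_to_osvif_vif({
    "id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669",
    "address": "fa:16:3e:c2:9d:e3",
    "devname": "tapdd8bd692-fb",
    "active": False,
    "details": {"bridge_name": "br-int"},
})
```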
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.154 239853 DEBUG nova.objects.instance [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.176 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <uuid>83d19a54-6f62-4d48-a43d-4cb27ceebbe3</uuid>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <name>instance-0000000f</name>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-server-573332264</nova:name>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:53:02</nova:creationTime>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <nova:port uuid="dd8bd692-fb2b-4d9b-a57d-7292316b5669">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="serial">83d19a54-6f62-4d48-a43d-4cb27ceebbe3</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="uuid">83d19a54-6f62-4d48-a43d-4cb27ceebbe3</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-4533e74b-612a-4eac-8ecd-f83e365e6e1a">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <serial>4533e74b-612a-4eac-8ecd-f83e365e6e1a</serial>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="0194b254-31db-4c9e-a550-978e5dd45aa4"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:c2:9d:e3"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <target dev="tapdd8bd692-fb"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/console.log" append="off"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:53:04 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:53:04 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:53:04 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:53:04 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
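In the domain XML above, both RBD disks authenticate against the same Ceph secret, and the boot volume additionally carries a LUKS passphrase secret. A stdlib sketch that extracts that disk-to-secret mapping from such XML (hypothetical analysis helper; `DOMAIN_XML` is a trimmed excerpt of the logged document):

```python
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-4533e74b-612a-4eac-8ecd-f83e365e6e1a"/>
      <auth username="openstack">
        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
      </auth>
      <target dev="vda" bus="virtio"/>
      <encryption format="luks">
        <secret type="passphrase" uuid="0194b254-31db-4c9e-a550-978e5dd45aa4"/>
      </encryption>
    </disk>
  </devices>
</domain>
"""

def disk_secrets(domain_xml: str) -> dict:
    """Map each disk's target dev to its ceph-auth and LUKS secret UUIDs."""
    out = {}
    for disk in ET.fromstring(domain_xml).iter("disk"):
        auth = disk.find("./auth/secret")
        luks = disk.find("./encryption/secret")
        out[disk.find("target").get("dev")] = {
            "ceph": auth.get("uuid") if auth is not None else None,
            "luks": luks.get("uuid") if luks is not None else None,
        }
    return out

secrets = disk_secrets(DOMAIN_XML)
```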
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.177 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Preparing to wait for external event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.177 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.177 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.178 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.178 239853 DEBUG nova.virt.libvirt.vif [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-573332264',display_name='tempest-TestVolumeBootPattern-server-573332264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-573332264',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-5fatxrzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:52:59Z,user_data=None
,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=83d19a54-6f62-4d48-a43d-4cb27ceebbe3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.179 239853 DEBUG nova.network.os_vif_util [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.179 239853 DEBUG nova.network.os_vif_util [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.180 239853 DEBUG os_vif [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.181 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.181 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.182 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.185 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.185 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd8bd692-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.185 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd8bd692-fb, col_values=(('external_ids', {'iface-id': 'dd8bd692-fb2b-4d9b-a57d-7292316b5669', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:9d:e3', 'vm-uuid': '83d19a54-6f62-4d48-a43d-4cb27ceebbe3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.187 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 NetworkManager[49022]: <info>  [1770054784.1884] manager: (tapdd8bd692-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.190 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.193 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.194 239853 INFO os_vif [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb')
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.227 239853 DEBUG nova.network.neutron [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Updated VIF entry in instance network info cache for port dd8bd692-fb2b-4d9b-a57d-7292316b5669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.227 239853 DEBUG nova.network.neutron [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Updating instance_info_cache with network_info: [{"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.238 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.238 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.238 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:c2:9d:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.239 239853 INFO nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Using config drive
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.258 239853 DEBUG nova.storage.rbd_utils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.263 239853 DEBUG oslo_concurrency.lockutils [req-e4b1c126-2c7f-49f8-9d63-1067dc7118d2 req-35b94b34-a712-4bd1-9ab8-8e575ceaeff6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-83d19a54-6f62-4d48-a43d-4cb27ceebbe3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.587 239853 INFO nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Creating config drive at /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.591 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzuos_rls execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.713 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzuos_rls" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.738 239853 DEBUG nova.storage.rbd_utils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.742 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config 83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.851 239853 DEBUG oslo_concurrency.processutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config 83d19a54-6f62-4d48-a43d-4cb27ceebbe3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.852 239853 INFO nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Deleting local config drive /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3/disk.config because it was imported into RBD.
Feb  2 12:53:04 np0005605476 kernel: tapdd8bd692-fb: entered promiscuous mode
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.891 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 NetworkManager[49022]: <info>  [1770054784.8925] manager: (tapdd8bd692-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Feb  2 12:53:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:04Z|00150|binding|INFO|Claiming lport dd8bd692-fb2b-4d9b-a57d-7292316b5669 for this chassis.
Feb  2 12:53:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:04Z|00151|binding|INFO|dd8bd692-fb2b-4d9b-a57d-7292316b5669: Claiming fa:16:3e:c2:9d:e3 10.100.0.9
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.896 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.906 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:9d:e3 10.100.0.9'], port_security=['fa:16:3e:c2:9d:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '83d19a54-6f62-4d48-a43d-4cb27ceebbe3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '896360ff-82ce-4969-a765-640e45612a7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=dd8bd692-fb2b-4d9b-a57d-7292316b5669) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.907 155391 INFO neutron.agent.ovn.metadata.agent [-] Port dd8bd692-fb2b-4d9b-a57d-7292316b5669 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.908 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.914 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[358118d7-a985-4ebc-b3e8-13f8abbdd7c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.914 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac1b83e6-81 in ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb  2 12:53:04 np0005605476 systemd-machined[208080]: New machine qemu-15-instance-0000000f.
Feb  2 12:53:04 np0005605476 systemd-udevd[260626]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.916 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac1b83e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.916 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b716ddb4-d1b2-4512-af16-f38759524337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.917 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1af7cc91-e6b7-440c-aabb-dbfae0a4c98d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.924 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[bad48f79-9e44-4fda-8daf-bee4ce9de99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Feb  2 12:53:04 np0005605476 NetworkManager[49022]: <info>  [1770054784.9301] device (tapdd8bd692-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:53:04 np0005605476 NetworkManager[49022]: <info>  [1770054784.9308] device (tapdd8bd692-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.931 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:04Z|00152|binding|INFO|Setting lport dd8bd692-fb2b-4d9b-a57d-7292316b5669 ovn-installed in OVS
Feb  2 12:53:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:04Z|00153|binding|INFO|Setting lport dd8bd692-fb2b-4d9b-a57d-7292316b5669 up in Southbound
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.937 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.946 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[16d268b7-31cc-4928-b5ca-f80204557adb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3081029472' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3081029472' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.960 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3673f5ec-027d-4254-bec4-cc54e29c0cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.964 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[88832e30-6f92-4bc3-a898-6e4fa4d4568c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 NetworkManager[49022]: <info>  [1770054784.9654] manager: (tapac1b83e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.985 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[11e49c8c-9d65-4406-8df1-e96ece95ab4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:04 np0005605476 nova_compute[239846]: 2026-02-02 17:53:04.988 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:53:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:04.988 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[79677e8f-4153-4930-98b2-3ad7545dd165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:05 np0005605476 NetworkManager[49022]: <info>  [1770054785.0016] device (tapac1b83e6-80): carrier: link connected
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.005 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ada4ef-52f5-49b1-8fc0-18c2acbbece1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.017 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3476b8-4238-4000-9d94-873344e3b659]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402633, 'reachable_time': 26663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260658, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.028 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0e610761-90e3-4cc7-9c57-3c1c49822084]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:c725'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402633, 'tstamp': 402633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260659, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.038 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b12846b1-6923-4705-a191-fc9756cf7000]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402633, 'reachable_time': 26663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260660, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.057 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d0741b6e-e20d-45a7-b1ab-7d2467c271a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.094 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[85a7f85f-5864-4b18-85b5-eada12df99c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.095 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.096 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.096 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:05 np0005605476 kernel: tapac1b83e6-80: entered promiscuous mode
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.098 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:05 np0005605476 NetworkManager[49022]: <info>  [1770054785.0986] manager: (tapac1b83e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.100 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.101 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:05 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:05Z|00154|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.102 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.108 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.109 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.110 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[94e8afeb-ef96-48e1-82c2-0772133646e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.110 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:53:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:05.111 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'env', 'PROCESS_TAG=haproxy-ac1b83e6-8e85-484a-9623-8960b1107077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac1b83e6-8e85-484a-9623-8960b1107077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Feb  2 12:53:05 np0005605476 podman[260692]: 2026-02-02 17:53:05.400716883 +0000 UTC m=+0.042144376 container create d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:53:05 np0005605476 systemd[1]: Started libpod-conmon-d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6.scope.
Feb  2 12:53:05 np0005605476 podman[260692]: 2026-02-02 17:53:05.378139008 +0000 UTC m=+0.019566521 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:53:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.479 239853 DEBUG nova.compute.manager [req-82c40e94-350e-49ce-ba48-edfd2c77aea1 req-6754f501-1341-4855-9d81-128d03524475 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.480 239853 DEBUG oslo_concurrency.lockutils [req-82c40e94-350e-49ce-ba48-edfd2c77aea1 req-6754f501-1341-4855-9d81-128d03524475 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.480 239853 DEBUG oslo_concurrency.lockutils [req-82c40e94-350e-49ce-ba48-edfd2c77aea1 req-6754f501-1341-4855-9d81-128d03524475 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.480 239853 DEBUG oslo_concurrency.lockutils [req-82c40e94-350e-49ce-ba48-edfd2c77aea1 req-6754f501-1341-4855-9d81-128d03524475 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:05 np0005605476 nova_compute[239846]: 2026-02-02 17:53:05.480 239853 DEBUG nova.compute.manager [req-82c40e94-350e-49ce-ba48-edfd2c77aea1 req-6754f501-1341-4855-9d81-128d03524475 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Processing event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:53:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4780532cfdbfdb4a09987d786d070f35e57893a230031a33b83b739decee48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:05 np0005605476 podman[260692]: 2026-02-02 17:53:05.496720702 +0000 UTC m=+0.138148205 container init d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:53:05 np0005605476 podman[260692]: 2026-02-02 17:53:05.500745595 +0000 UTC m=+0.142173068 container start d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:53:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 40 KiB/s wr, 230 op/s
Feb  2 12:53:05 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [NOTICE]   (260712) : New worker (260714) forked
Feb  2 12:53:05 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [NOTICE]   (260712) : Loading success.
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023097021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Feb  2 12:53:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Feb  2 12:53:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.3 KiB/s wr, 122 op/s
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.627 239853 DEBUG nova.compute.manager [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.627 239853 DEBUG oslo_concurrency.lockutils [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.628 239853 DEBUG oslo_concurrency.lockutils [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.628 239853 DEBUG oslo_concurrency.lockutils [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.629 239853 DEBUG nova.compute.manager [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] No waiting events found dispatching network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.629 239853 WARNING nova.compute.manager [req-3e49eb46-63f6-44b1-955c-bf2498a7603b req-8dd02f45-a10e-43fa-9dad-cf5fe380a0c0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received unexpected event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.836 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.837 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054787.8367617, 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.837 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] VM Started (Lifecycle Event)#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.839 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.842 239853 INFO nova.virt.libvirt.driver [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Instance spawned successfully.#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.842 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.861 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.864 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.871 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.872 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.872 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.872 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.873 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.873 239853 DEBUG nova.virt.libvirt.driver [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.883 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.884 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054787.8369503, 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.884 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.940 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.943 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054787.8392477, 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.944 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.975 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:07 np0005605476 nova_compute[239846]: 2026-02-02 17:53:07.979 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:53:08 np0005605476 nova_compute[239846]: 2026-02-02 17:53:08.007 239853 INFO nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Took 7.04 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:53:08 np0005605476 nova_compute[239846]: 2026-02-02 17:53:08.008 239853 DEBUG nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:08 np0005605476 nova_compute[239846]: 2026-02-02 17:53:08.010 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:53:08 np0005605476 nova_compute[239846]: 2026-02-02 17:53:08.071 239853 INFO nova.compute.manager [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Took 9.33 seconds to build instance.#033[00m
Feb  2 12:53:08 np0005605476 nova_compute[239846]: 2026-02-02 17:53:08.091 239853 DEBUG oslo_concurrency.lockutils [None req-06fed808-ff26-447b-bd14-d63d0194f559 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Feb  2 12:53:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Feb  2 12:53:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Feb  2 12:53:09 np0005605476 nova_compute[239846]: 2026-02-02 17:53:09.189 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 29 KiB/s wr, 153 op/s
Feb  2 12:53:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:09 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1237979238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:09 np0005605476 nova_compute[239846]: 2026-02-02 17:53:09.991 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Feb  2 12:53:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Feb  2 12:53:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.635 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.635 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.636 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.636 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.637 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.638 239853 INFO nova.compute.manager [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Terminating instance#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.639 239853 DEBUG nova.compute.manager [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:53:10 np0005605476 kernel: tapdd8bd692-fb (unregistering): left promiscuous mode
Feb  2 12:53:10 np0005605476 NetworkManager[49022]: <info>  [1770054790.6872] device (tapdd8bd692-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.688 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:10Z|00155|binding|INFO|Releasing lport dd8bd692-fb2b-4d9b-a57d-7292316b5669 from this chassis (sb_readonly=0)
Feb  2 12:53:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:10Z|00156|binding|INFO|Setting lport dd8bd692-fb2b-4d9b-a57d-7292316b5669 down in Southbound
Feb  2 12:53:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:10Z|00157|binding|INFO|Removing iface tapdd8bd692-fb ovn-installed in OVS
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.698 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.708 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:10.714 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:9d:e3 10.100.0.9'], port_security=['fa:16:3e:c2:9d:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '83d19a54-6f62-4d48-a43d-4cb27ceebbe3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '896360ff-82ce-4969-a765-640e45612a7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=dd8bd692-fb2b-4d9b-a57d-7292316b5669) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:53:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:10.715 155391 INFO neutron.agent.ovn.metadata.agent [-] Port dd8bd692-fb2b-4d9b-a57d-7292316b5669 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:53:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:10.716 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac1b83e6-8e85-484a-9623-8960b1107077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:53:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:10.717 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[46c0a5be-9a19-4faa-aff3-47a03280430e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:10 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:10.718 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace which is not needed anymore#033[00m
Feb  2 12:53:10 np0005605476 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb  2 12:53:10 np0005605476 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.246s CPU time.
Feb  2 12:53:10 np0005605476 systemd-machined[208080]: Machine qemu-15-instance-0000000f terminated.
Feb  2 12:53:10 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [NOTICE]   (260712) : haproxy version is 2.8.14-c23fe91
Feb  2 12:53:10 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [NOTICE]   (260712) : path to executable is /usr/sbin/haproxy
Feb  2 12:53:10 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [WARNING]  (260712) : Exiting Master process...
Feb  2 12:53:10 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [ALERT]    (260712) : Current worker (260714) exited with code 143 (Terminated)
Feb  2 12:53:10 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[260708]: [WARNING]  (260712) : All workers exited. Exiting... (0)
Feb  2 12:53:10 np0005605476 systemd[1]: libpod-d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6.scope: Deactivated successfully.
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.859 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 podman[260790]: 2026-02-02 17:53:10.860340029 +0000 UTC m=+0.073016403 container died d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.863 239853 DEBUG nova.compute.manager [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-unplugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.864 239853 DEBUG oslo_concurrency.lockutils [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.864 239853 DEBUG oslo_concurrency.lockutils [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.864 239853 DEBUG oslo_concurrency.lockutils [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.864 239853 DEBUG nova.compute.manager [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] No waiting events found dispatching network-vif-unplugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.865 239853 DEBUG nova.compute.manager [req-38ffd3af-1dc6-4f66-bd53-0c3fead4a8ac req-2303a80a-d330-4bf9-a8e0-138428bd6671 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-unplugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.865 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.874 239853 INFO nova.virt.libvirt.driver [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Instance destroyed successfully.#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.875 239853 DEBUG nova.objects.instance [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.888 239853 DEBUG nova.virt.libvirt.vif [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-573332264',display_name='tempest-TestVolumeBootPattern-server-573332264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-573332264',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:53:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-5fatxrzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-
member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:53:08Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=83d19a54-6f62-4d48-a43d-4cb27ceebbe3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.889 239853 DEBUG nova.network.os_vif_util [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "address": "fa:16:3e:c2:9d:e3", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8bd692-fb", "ovs_interfaceid": "dd8bd692-fb2b-4d9b-a57d-7292316b5669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.890 239853 DEBUG nova.network.os_vif_util [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.890 239853 DEBUG os_vif [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.891 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.892 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd8bd692-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.893 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.894 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:10 np0005605476 nova_compute[239846]: 2026-02-02 17:53:10.896 239853 INFO os_vif [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:9d:e3,bridge_name='br-int',has_traffic_filtering=True,id=dd8bd692-fb2b-4d9b-a57d-7292316b5669,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8bd692-fb')#033[00m
Feb  2 12:53:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6-userdata-shm.mount: Deactivated successfully.
Feb  2 12:53:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4f4780532cfdbfdb4a09987d786d070f35e57893a230031a33b83b739decee48-merged.mount: Deactivated successfully.
Feb  2 12:53:10 np0005605476 podman[260790]: 2026-02-02 17:53:10.953525649 +0000 UTC m=+0.166201993 container cleanup d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 12:53:10 np0005605476 systemd[1]: libpod-conmon-d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6.scope: Deactivated successfully.
Feb  2 12:53:11 np0005605476 podman[260848]: 2026-02-02 17:53:11.009138173 +0000 UTC m=+0.037912087 container remove d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.012 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[54dc156e-ce31-4b51-a6f4-cca5d84ccce1]: (4, ('Mon Feb  2 05:53:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6)\nd999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6\nMon Feb  2 05:53:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (d999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6)\nd999ea2873acd28fdcbd37d978ca5bb2ef11417c185306ec75ea6199e1c290b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.014 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d886dea-c5cb-468e-8a19-cf070a7660ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.015 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:11 np0005605476 kernel: tapac1b83e6-80: left promiscuous mode
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.016 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.022 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.025 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[78df20b3-47bc-4044-adf4-ce11d352882a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.035 239853 INFO nova.virt.libvirt.driver [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Deleting instance files /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3_del#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.036 239853 INFO nova.virt.libvirt.driver [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Deletion of /var/lib/nova/instances/83d19a54-6f62-4d48-a43d-4cb27ceebbe3_del complete#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.038 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69682f04-f409-4b8b-a155-886e53ff47a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.039 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7a57aa67-2281-449a-b073-aa07763a3a4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.049 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[feddb9c9-e369-4451-b87e-36a7df646507]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402629, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260865, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 systemd[1]: run-netns-ovnmeta\x2dac1b83e6\x2d8e85\x2d484a\x2d9623\x2d8960b1107077.mount: Deactivated successfully.
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.052 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:53:11 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:11.052 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[be536471-a179-46b5-a050-725eff906cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.076 239853 INFO nova.compute.manager [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.076 239853 DEBUG oslo.service.loopingcall [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.077 239853 DEBUG nova.compute.manager [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.077 239853 DEBUG nova.network.neutron [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:53:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Feb  2 12:53:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Feb  2 12:53:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Feb  2 12:53:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 38 KiB/s wr, 71 op/s
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.795 239853 DEBUG nova.network.neutron [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.812 239853 INFO nova.compute.manager [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Took 0.73 seconds to deallocate network for instance.#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.884 239853 DEBUG nova.compute.manager [req-e2507dba-7277-441b-871f-4d5ec2d9fc8a req-7ba15422-142c-4124-b40f-37541b5290dc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-deleted-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.955 239853 INFO nova.compute.manager [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.998 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:11 np0005605476 nova_compute[239846]: 2026-02-02 17:53:11.998 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.041 239853 DEBUG oslo_concurrency.processutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:53:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/669697733' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.581 239853 DEBUG oslo_concurrency.processutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.586 239853 DEBUG nova.compute.provider_tree [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.604 239853 DEBUG nova.scheduler.client.report [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.627 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.658 239853 INFO nova.scheduler.client.report [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance 83d19a54-6f62-4d48-a43d-4cb27ceebbe3#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.718 239853 DEBUG oslo_concurrency.lockutils [None req-77f7c743-d76a-4a24-9e89-edb622d5d100 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.944 239853 DEBUG nova.compute.manager [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.944 239853 DEBUG oslo_concurrency.lockutils [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.944 239853 DEBUG oslo_concurrency.lockutils [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.945 239853 DEBUG oslo_concurrency.lockutils [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "83d19a54-6f62-4d48-a43d-4cb27ceebbe3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.945 239853 DEBUG nova.compute.manager [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] No waiting events found dispatching network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:12 np0005605476 nova_compute[239846]: 2026-02-02 17:53:12.945 239853 WARNING nova.compute.manager [req-6321b13c-ac05-4c4d-b0ee-3c9d6af972d8 req-25492ca3-699f-4398-8399-d6f780d0bd27 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Received unexpected event network-vif-plugged-dd8bd692-fb2b-4d9b-a57d-7292316b5669 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Feb  2 12:53:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 31 KiB/s wr, 57 op/s
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2218665962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2218665962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436152029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:14 np0005605476 nova_compute[239846]: 2026-02-02 17:53:14.992 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Feb  2 12:53:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Feb  2 12:53:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Feb  2 12:53:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 12 KiB/s wr, 225 op/s
Feb  2 12:53:15 np0005605476 nova_compute[239846]: 2026-02-02 17:53:15.895 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Feb  2 12:53:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Feb  2 12:53:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Feb  2 12:53:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 9.5 KiB/s wr, 191 op/s
Feb  2 12:53:17 np0005605476 podman[260888]: 2026-02-02 17:53:17.58967555 +0000 UTC m=+0.043929836 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/477562352' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/477562352' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 408 op/s
Feb  2 12:53:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464765203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464765203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:19 np0005605476 nova_compute[239846]: 2026-02-02 17:53:19.995 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Feb  2 12:53:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Feb  2 12:53:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Feb  2 12:53:20 np0005605476 nova_compute[239846]: 2026-02-02 17:53:20.898 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Feb  2 12:53:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Feb  2 12:53:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Feb  2 12:53:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 12 KiB/s wr, 296 op/s
Feb  2 12:53:21 np0005605476 podman[260906]: 2026-02-02 17:53:21.682792619 +0000 UTC m=+0.132580218 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1282157877' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1282157877' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539164401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Feb  2 12:53:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Feb  2 12:53:23 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Feb  2 12:53:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 12 KiB/s wr, 297 op/s
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1847955080' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1847955080' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:24 np0005605476 nova_compute[239846]: 2026-02-02 17:53:24.998 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Feb  2 12:53:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 5.2 MiB/s wr, 380 op/s
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188540737' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188540737' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:25 np0005605476 nova_compute[239846]: 2026-02-02 17:53:25.873 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054790.871762, 83d19a54-6f62-4d48-a43d-4cb27ceebbe3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:25 np0005605476 nova_compute[239846]: 2026-02-02 17:53:25.873 239853 INFO nova.compute.manager [-] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:53:25 np0005605476 nova_compute[239846]: 2026-02-02 17:53:25.894 239853 DEBUG nova.compute.manager [None req-4534d0ab-7219-4e45-9037-8671694aaaa4 - - - - - -] [instance: 83d19a54-6f62-4d48-a43d-4cb27ceebbe3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:25 np0005605476 nova_compute[239846]: 2026-02-02 17:53:25.899 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:26 np0005605476 nova_compute[239846]: 2026-02-02 17:53:26.962 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:26 np0005605476 nova_compute[239846]: 2026-02-02 17:53:26.963 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:26 np0005605476 nova_compute[239846]: 2026-02-02 17:53:26.988 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.084 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.084 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.092 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.092 239853 INFO nova.compute.claims [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.206 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 261 op/s
Feb  2 12:53:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:53:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257514639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.733 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.738 239853 DEBUG nova.compute.provider_tree [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.752 239853 DEBUG nova.scheduler.client.report [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.779 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.780 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.818 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.818 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.836 239853 INFO nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.855 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.920 239853 INFO nova.virt.block_device [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Booting with volume snapshot 2495ea47-a880-4c16-8cbe-8c124b9b943c at /dev/vda#033[00m
Feb  2 12:53:27 np0005605476 nova_compute[239846]: 2026-02-02 17:53:27.996 239853 DEBUG nova.policy [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:53:28 np0005605476 nova_compute[239846]: 2026-02-02 17:53:28.480 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Successfully created port: dc58cc1e-fb01-4551-8977-093651241115 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1932375557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1932375557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 3.6 MiB/s wr, 382 op/s
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.624 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Successfully updated port: dc58cc1e-fb01-4551-8977-093651241115 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.639 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.640 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.641 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.755 239853 DEBUG nova.compute.manager [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-changed-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.756 239853 DEBUG nova.compute.manager [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Refreshing instance network info cache due to event network-changed-dc58cc1e-fb01-4551-8977-093651241115. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.756 239853 DEBUG oslo_concurrency.lockutils [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:53:29 np0005605476 nova_compute[239846]: 2026-02-02 17:53:29.840 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.001 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Feb  2 12:53:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Feb  2 12:53:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.656 239853 DEBUG nova.network.neutron [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updating instance_info_cache with network_info: [{"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.674 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.675 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Instance network_info: |[{"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.675 239853 DEBUG oslo_concurrency.lockutils [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.676 239853 DEBUG nova.network.neutron [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Refreshing network info cache for port dc58cc1e-fb01-4551-8977-093651241115 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:53:30 np0005605476 nova_compute[239846]: 2026-02-02 17:53:30.903 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Feb  2 12:53:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Feb  2 12:53:31 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Feb  2 12:53:31 np0005605476 nova_compute[239846]: 2026-02-02 17:53:31.263 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 7.5 KiB/s wr, 152 op/s
Feb  2 12:53:31 np0005605476 nova_compute[239846]: 2026-02-02 17:53:31.731 239853 DEBUG nova.network.neutron [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updated VIF entry in instance network info cache for port dc58cc1e-fb01-4551-8977-093651241115. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:53:31 np0005605476 nova_compute[239846]: 2026-02-02 17:53:31.732 239853 DEBUG nova.network.neutron [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updating instance_info_cache with network_info: [{"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:31 np0005605476 nova_compute[239846]: 2026-02-02 17:53:31.748 239853 DEBUG oslo_concurrency.lockutils [req-a4e327df-dd1f-46f4-9b2c-cd6aa7223873 req-de0eb154-5d0b-4a42-b89a-0cb73af44d5b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.191 239853 DEBUG os_brick.utils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.191 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.200 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.200 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[02c6365b-5794-4ad2-975c-d6a5fb0fee55]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.201 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.206 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.206 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e982ee-241e-4cf9-8fa8-6e5cc7a800c4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.208 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.212 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.212 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[282709e5-f47e-4caa-93ef-9b555068fd7a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.213 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[85bebdff-6890-466b-b1e8-761e50f169bc]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.213 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.227 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.229 239853 DEBUG os_brick.initiator.connectors.lightos [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.229 239853 DEBUG os_brick.initiator.connectors.lightos [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.229 239853 DEBUG os_brick.initiator.connectors.lightos [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.230 239853 DEBUG os_brick.utils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (38ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:53:32 np0005605476 nova_compute[239846]: 2026-02-02 17:53:32.230 239853 DEBUG nova.virt.block_device [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updating existing volume attachment record: db76cee1-6442-43a2-b954-d4500d562365 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:53:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956813862' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4061584433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Feb  2 12:53:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Feb  2 12:53:33 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.334 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.335 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.336 239853 INFO nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Creating image(s)#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.336 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.336 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Ensure instance console log exists: /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.337 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.337 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.337 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.340 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Start _get_guest_xml network_info=[{"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': True, 'disk_bus': 'virtio', 'attachment_id': 'db76cee1-6442-43a2-b954-d4500d562365', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a528a063-0f57-4cdb-9ed5-bd76945a6312', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a528a063-0f57-4cdb-9ed5-bd76945a6312', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '860aa53f-382e-4f4e-98bd-bd89a752e349', 'attached_at': '', 'detached_at': '', 'volume_id': 'a528a063-0f57-4cdb-9ed5-bd76945a6312', 'serial': 'a528a063-0f57-4cdb-9ed5-bd76945a6312'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.344 239853 WARNING nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.354 239853 DEBUG nova.virt.libvirt.host [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.355 239853 DEBUG nova.virt.libvirt.host [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.361 239853 DEBUG nova.virt.libvirt.host [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.361 239853 DEBUG nova.virt.libvirt.host [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.362 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.362 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.362 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.362 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.363 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.363 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.363 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.363 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.363 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.364 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.364 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.364 239853 DEBUG nova.virt.hardware [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.385 239853 DEBUG nova.storage.rbd_utils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.389 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 9.1 KiB/s wr, 184 op/s
Feb  2 12:53:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222848723' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.897 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.921 239853 DEBUG nova.virt.libvirt.vif [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:53:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1326008283',display_name='tempest-TestVolumeBootPattern-server-1326008283',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1326008283',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-h0rlrwe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cer
ts=None,updated_at=2026-02-02T17:53:27Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=860aa53f-382e-4f4e-98bd-bd89a752e349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.921 239853 DEBUG nova.network.os_vif_util [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.922 239853 DEBUG nova.network.os_vif_util [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.923 239853 DEBUG nova.objects.instance [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid 860aa53f-382e-4f4e-98bd-bd89a752e349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.935 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <uuid>860aa53f-382e-4f4e-98bd-bd89a752e349</uuid>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <name>instance-00000010</name>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-server-1326008283</nova:name>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:53:33</nova:creationTime>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <nova:port uuid="dc58cc1e-fb01-4551-8977-093651241115">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="serial">860aa53f-382e-4f4e-98bd-bd89a752e349</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="uuid">860aa53f-382e-4f4e-98bd-bd89a752e349</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-a528a063-0f57-4cdb-9ed5-bd76945a6312">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <serial>a528a063-0f57-4cdb-9ed5-bd76945a6312</serial>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:94:db:b5"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <target dev="tapdc58cc1e-fb"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/console.log" append="off"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:53:33 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:53:33 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:53:33 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:53:33 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.935 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Preparing to wait for external event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.935 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.936 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.936 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.936 239853 DEBUG nova.virt.libvirt.vif [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:53:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1326008283',display_name='tempest-TestVolumeBootPattern-server-1326008283',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1326008283',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-h0rlrwe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,t
rusted_certs=None,updated_at=2026-02-02T17:53:27Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=860aa53f-382e-4f4e-98bd-bd89a752e349,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.937 239853 DEBUG nova.network.os_vif_util [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.937 239853 DEBUG nova.network.os_vif_util [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.938 239853 DEBUG os_vif [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.938 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.938 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.939 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.941 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.941 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc58cc1e-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.941 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc58cc1e-fb, col_values=(('external_ids', {'iface-id': 'dc58cc1e-fb01-4551-8977-093651241115', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:db:b5', 'vm-uuid': '860aa53f-382e-4f4e-98bd-bd89a752e349'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:33 np0005605476 NetworkManager[49022]: <info>  [1770054813.9437] manager: (tapdc58cc1e-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.945 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.946 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.947 239853 INFO os_vif [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb')#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.985 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.986 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.986 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:94:db:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:53:33 np0005605476 nova_compute[239846]: 2026-02-02 17:53:33.987 239853 INFO nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Using config drive#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.008 239853 DEBUG nova.storage.rbd_utils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:53:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Feb  2 12:53:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Feb  2 12:53:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.282 239853 INFO nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Creating config drive at /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.286 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3r6rrox0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.409 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3r6rrox0" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.429 239853 DEBUG nova.storage.rbd_utils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.432 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config 860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.557 239853 DEBUG oslo_concurrency.processutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config 860aa53f-382e-4f4e-98bd-bd89a752e349_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.558 239853 INFO nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Deleting local config drive /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349/disk.config because it was imported into RBD.#033[00m
Feb  2 12:53:34 np0005605476 kernel: tapdc58cc1e-fb: entered promiscuous mode
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.6059] manager: (tapdc58cc1e-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Feb  2 12:53:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:34Z|00158|binding|INFO|Claiming lport dc58cc1e-fb01-4551-8977-093651241115 for this chassis.
Feb  2 12:53:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:34Z|00159|binding|INFO|dc58cc1e-fb01-4551-8977-093651241115: Claiming fa:16:3e:94:db:b5 10.100.0.7
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.607 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.612 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:db:b5 10.100.0.7'], port_security=['fa:16:3e:94:db:b5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '860aa53f-382e-4f4e-98bd-bd89a752e349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '896360ff-82ce-4969-a765-640e45612a7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=dc58cc1e-fb01-4551-8977-093651241115) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.613 155391 INFO neutron.agent.ovn.metadata.agent [-] Port dc58cc1e-fb01-4551-8977-093651241115 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.614 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:53:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:34Z|00160|binding|INFO|Setting lport dc58cc1e-fb01-4551-8977-093651241115 ovn-installed in OVS
Feb  2 12:53:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:34Z|00161|binding|INFO|Setting lport dc58cc1e-fb01-4551-8977-093651241115 up in Southbound
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.616 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.624 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[48c7e7b0-b19e-4759-a2c8-eb9c40e2b700]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.625 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac1b83e6-81 in ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.628 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac1b83e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.628 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fdea9adc-9f15-44bd-a445-421c677aa272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.629 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d8f28b-f9dc-4a6a-b531-e440a0310179]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 systemd-udevd[261076]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:53:34 np0005605476 systemd-machined[208080]: New machine qemu-16-instance-00000010.
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.638 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[57e5bdd1-e458-4e55-9d8c-10e2bfcefd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.6524] device (tapdc58cc1e-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.6535] device (tapdc58cc1e-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.652 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[77fab2a7-a936-4a4e-bbe7-e7f6025441f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.672 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbe6c7-371e-427e-9d8d-166241d4a5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.676 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc7ebc1-7385-46a8-a053-4f1cbe85f24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.6784] manager: (tapac1b83e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Feb  2 12:53:34 np0005605476 systemd-udevd[261080]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.699 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6519006f-5b09-4b41-95e9-767697c35887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.702 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f72b9f43-2196-4249-a6dd-0975636373c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.7188] device (tapac1b83e6-80): carrier: link connected
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.723 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b7be045c-31eb-4fb8-bc82-937cd69c659a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.740 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbbd4a1-adfb-4c64-9fc1-3b074934686e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405605, 'reachable_time': 15454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261108, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.750 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9618135b-3d4f-45aa-9408-0554488b7de3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:c725'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405605, 'tstamp': 405605}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261109, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.767 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[61b3200d-a256-471e-bbd5-a977f71963e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405605, 'reachable_time': 15454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261110, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.786 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f83a4a76-eb2f-432d-aaa1-5b72379ef35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.830 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cc97b148-9c2f-4666-b32d-c7eb75bcd357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.832 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.833 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.833 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.836 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:34 np0005605476 NetworkManager[49022]: <info>  [1770054814.8365] manager: (tapac1b83e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Feb  2 12:53:34 np0005605476 kernel: tapac1b83e6-80: entered promiscuous mode
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.839 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.840 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:34 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:34Z|00162|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.842 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.843 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[07f14513-c4d6-42c8-aa2d-5a809e577ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.844 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:53:34 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:34.845 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'env', 'PROCESS_TAG=haproxy-ac1b83e6-8e85-484a-9623-8960b1107077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac1b83e6-8e85-484a-9623-8960b1107077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:53:34 np0005605476 nova_compute[239846]: 2026-02-02 17:53:34.850 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.002 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.090 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054815.0900874, 860aa53f-382e-4f4e-98bd-bd89a752e349 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.091 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] VM Started (Lifecycle Event)#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.119 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.123 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054815.090187, 860aa53f-382e-4f4e-98bd-bd89a752e349 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.123 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.142 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.146 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:53:35 np0005605476 podman[261184]: 2026-02-02 17:53:35.158123221 +0000 UTC m=+0.058386723 container create 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.167 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.194 239853 DEBUG nova.compute.manager [req-f246ce12-a9ed-4adf-a803-22d42faad96f req-dce183f3-ce98-478a-aff6-58c8bc8a9946 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.194 239853 DEBUG oslo_concurrency.lockutils [req-f246ce12-a9ed-4adf-a803-22d42faad96f req-dce183f3-ce98-478a-aff6-58c8bc8a9946 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.194 239853 DEBUG oslo_concurrency.lockutils [req-f246ce12-a9ed-4adf-a803-22d42faad96f req-dce183f3-ce98-478a-aff6-58c8bc8a9946 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.195 239853 DEBUG oslo_concurrency.lockutils [req-f246ce12-a9ed-4adf-a803-22d42faad96f req-dce183f3-ce98-478a-aff6-58c8bc8a9946 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.195 239853 DEBUG nova.compute.manager [req-f246ce12-a9ed-4adf-a803-22d42faad96f req-dce183f3-ce98-478a-aff6-58c8bc8a9946 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Processing event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:53:35 np0005605476 systemd[1]: Started libpod-conmon-85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267.scope.
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.196 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.200 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054815.2002068, 860aa53f-382e-4f4e-98bd-bd89a752e349 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.200 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.202 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.205 239853 INFO nova.virt.libvirt.driver [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Instance spawned successfully.#033[00m
Feb  2 12:53:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.206 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:53:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Feb  2 12:53:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Feb  2 12:53:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Feb  2 12:53:35 np0005605476 podman[261184]: 2026-02-02 17:53:35.124898327 +0000 UTC m=+0.025161819 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:53:35 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:35 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b29d1d00c00c467301a1c4ee0ea65f876141ee9495f1224d34bd7ca9d21ff4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:35 np0005605476 podman[261184]: 2026-02-02 17:53:35.239212961 +0000 UTC m=+0.139476503 container init 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.239 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.247 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:53:35 np0005605476 podman[261184]: 2026-02-02 17:53:35.247462673 +0000 UTC m=+0.147726205 container start 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.251 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.251 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.252 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.252 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.253 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.253 239853 DEBUG nova.virt.libvirt.driver [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:53:35 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [NOTICE]   (261203) : New worker (261205) forked
Feb  2 12:53:35 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [NOTICE]   (261203) : Loading success.
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.276 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.314 239853 INFO nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Took 1.98 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.315 239853 DEBUG nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.368 239853 INFO nova.compute.manager [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Took 8.32 seconds to build instance.#033[00m
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.381 239853 DEBUG oslo_concurrency.lockutils [None req-13deae7c-9ed7-4115-b228-9763233b5f66 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 10 KiB/s wr, 199 op/s
Feb  2 12:53:35 np0005605476 nova_compute[239846]: 2026-02-02 17:53:35.616 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:35.616 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:53:35 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:35.618 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:53:36 np0005605476 nova_compute[239846]: 2026-02-02 17:53:36.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188072221' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188072221' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:53:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:53:36
Feb  2 12:53:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:53:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:53:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta']
Feb  2 12:53:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:53:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.500 239853 DEBUG nova.compute.manager [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.501 239853 DEBUG oslo_concurrency.lockutils [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.501 239853 DEBUG oslo_concurrency.lockutils [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.501 239853 DEBUG oslo_concurrency.lockutils [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.502 239853 DEBUG nova.compute.manager [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] No waiting events found dispatching network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.502 239853 WARNING nova.compute.manager [req-059bb05c-c403-465a-a7e4-412ce7675e6c req-1b0c539f-b148-4e97-9e00-bec0e7fdc7c3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received unexpected event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 134 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 10 KiB/s wr, 201 op/s
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:53:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.689 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.690 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.690 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:53:37 np0005605476 nova_compute[239846]: 2026-02-02 17:53:37.690 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 860aa53f-382e-4f4e-98bd-bd89a752e349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:53:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.081627429 +0000 UTC m=+0.037264279 container create 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:53:38 np0005605476 systemd[1]: Started libpod-conmon-9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b.scope.
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/563639539' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/563639539' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.158406478 +0000 UTC m=+0.114043348 container init 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.064473527 +0000 UTC m=+0.020110407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.164763536 +0000 UTC m=+0.120400386 container start 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.16843754 +0000 UTC m=+0.124074390 container attach 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:53:38 np0005605476 wonderful_hofstadter[261445]: 167 167
Feb  2 12:53:38 np0005605476 systemd[1]: libpod-9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b.scope: Deactivated successfully.
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.170283522 +0000 UTC m=+0.125920372 container died 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:53:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9661b790365edbb3bd4c4ac7090649c4c3b0e67712bda76d81f0cba1d672c029-merged.mount: Deactivated successfully.
Feb  2 12:53:38 np0005605476 podman[261429]: 2026-02-02 17:53:38.20830428 +0000 UTC m=+0.163941130 container remove 9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:53:38 np0005605476 systemd[1]: libpod-conmon-9b3a87218c3bdd1ddba9b37b580f8b13aff1ab30c600e15d8de22960ee5edf1b.scope: Deactivated successfully.
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.374860173 +0000 UTC m=+0.061840210 container create fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:38 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:53:38 np0005605476 systemd[1]: Started libpod-conmon-fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e.scope.
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.342201645 +0000 UTC m=+0.029181772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.463082963 +0000 UTC m=+0.150063010 container init fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.468965418 +0000 UTC m=+0.155945445 container start fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.47221711 +0000 UTC m=+0.159197187 container attach fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:53:38 np0005605476 ecstatic_payne[261484]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:53:38 np0005605476 ecstatic_payne[261484]: --> All data devices are unavailable
Feb  2 12:53:38 np0005605476 systemd[1]: libpod-fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e.scope: Deactivated successfully.
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.925595666 +0000 UTC m=+0.612575713 container died fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 12:53:38 np0005605476 nova_compute[239846]: 2026-02-02 17:53:38.944 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ac9eba7d162c57af3f934c94d36cec5d46b97d3bf9ae5fe5c56020401b4d16cb-merged.mount: Deactivated successfully.
Feb  2 12:53:38 np0005605476 podman[261468]: 2026-02-02 17:53:38.967540045 +0000 UTC m=+0.654520072 container remove fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb  2 12:53:38 np0005605476 systemd[1]: libpod-conmon-fa89f4ab6ae6addf1dd8dbab5aadd38d0cda3c637ae4b8315f42234378a7912e.scope: Deactivated successfully.
Feb  2 12:53:38 np0005605476 nova_compute[239846]: 2026-02-02 17:53:38.985 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updating instance_info_cache with network_info: [{"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.016 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-860aa53f-382e-4f4e-98bd-bd89a752e349" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.017 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.017 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.018 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.047 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.048 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.048 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.048 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.048 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.075 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.076 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.076 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.077 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.077 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.079 239853 INFO nova.compute.manager [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Terminating instance#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.080 239853 DEBUG nova.compute.manager [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:53:39 np0005605476 kernel: tapdc58cc1e-fb (unregistering): left promiscuous mode
Feb  2 12:53:39 np0005605476 NetworkManager[49022]: <info>  [1770054819.1237] device (tapdc58cc1e-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.164 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:39Z|00163|binding|INFO|Releasing lport dc58cc1e-fb01-4551-8977-093651241115 from this chassis (sb_readonly=0)
Feb  2 12:53:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:39Z|00164|binding|INFO|Setting lport dc58cc1e-fb01-4551-8977-093651241115 down in Southbound
Feb  2 12:53:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:53:39Z|00165|binding|INFO|Removing iface tapdc58cc1e-fb ovn-installed in OVS
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.172 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.175 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:db:b5 10.100.0.7'], port_security=['fa:16:3e:94:db:b5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '860aa53f-382e-4f4e-98bd-bd89a752e349', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '896360ff-82ce-4969-a765-640e45612a7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=dc58cc1e-fb01-4551-8977-093651241115) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.176 155391 INFO neutron.agent.ovn.metadata.agent [-] Port dc58cc1e-fb01-4551-8977-093651241115 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.177 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac1b83e6-8e85-484a-9623-8960b1107077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.178 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[819ab922-d57b-4f96-91a9-88878e08b740]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.182 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace which is not needed anymore#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb  2 12:53:39 np0005605476 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 4.436s CPU time.
Feb  2 12:53:39 np0005605476 systemd-machined[208080]: Machine qemu-16-instance-00000010 terminated.
Feb  2 12:53:39 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [NOTICE]   (261203) : haproxy version is 2.8.14-c23fe91
Feb  2 12:53:39 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [NOTICE]   (261203) : path to executable is /usr/sbin/haproxy
Feb  2 12:53:39 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [WARNING]  (261203) : Exiting Master process...
Feb  2 12:53:39 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [ALERT]    (261203) : Current worker (261205) exited with code 143 (Terminated)
Feb  2 12:53:39 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[261199]: [WARNING]  (261203) : All workers exited. Exiting... (0)
Feb  2 12:53:39 np0005605476 systemd[1]: libpod-85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267.scope: Deactivated successfully.
Feb  2 12:53:39 np0005605476 podman[261608]: 2026-02-02 17:53:39.298713215 +0000 UTC m=+0.047433674 container died 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.317 239853 INFO nova.virt.libvirt.driver [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Instance destroyed successfully.#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.317 239853 DEBUG nova.objects.instance [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid 860aa53f-382e-4f4e-98bd-bd89a752e349 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.337 239853 DEBUG nova.virt.libvirt.vif [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:53:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1326008283',display_name='tempest-TestVolumeBootPattern-server-1326008283',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1326008283',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:53:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-h0rlrwe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',ow
ner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:53:35Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=860aa53f-382e-4f4e-98bd-bd89a752e349,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267-userdata-shm.mount: Deactivated successfully.
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.338 239853 DEBUG nova.network.os_vif_util [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "dc58cc1e-fb01-4551-8977-093651241115", "address": "fa:16:3e:94:db:b5", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc58cc1e-fb", "ovs_interfaceid": "dc58cc1e-fb01-4551-8977-093651241115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.339 239853 DEBUG nova.network.os_vif_util [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0b29d1d00c00c467301a1c4ee0ea65f876141ee9495f1224d34bd7ca9d21ff4f-merged.mount: Deactivated successfully.
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.340 239853 DEBUG os_vif [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.343 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.343 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc58cc1e-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.346 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.347 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.350 239853 INFO os_vif [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:db:b5,bridge_name='br-int',has_traffic_filtering=True,id=dc58cc1e-fb01-4551-8977-093651241115,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc58cc1e-fb')#033[00m
Feb  2 12:53:39 np0005605476 podman[261608]: 2026-02-02 17:53:39.353614839 +0000 UTC m=+0.102335298 container cleanup 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:53:39 np0005605476 systemd[1]: libpod-conmon-85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267.scope: Deactivated successfully.
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.401878586 +0000 UTC m=+0.040264303 container create ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Feb  2 12:53:39 np0005605476 podman[261670]: 2026-02-02 17:53:39.434965126 +0000 UTC m=+0.060888373 container remove 85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.445 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[23977f43-cce7-4ad6-8f78-1db0e36c0b54]: (4, ('Mon Feb  2 05:53:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267)\n85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267\nMon Feb  2 05:53:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267)\n85a8011f5a437db3cbcac85eb1b0e1f75cacc90516a94afa50bdd102c78e6267\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.447 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[709deb77-02e8-4ff9-92c2-8b3cf72c86d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.448 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.449 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 kernel: tapac1b83e6-80: left promiscuous mode
Feb  2 12:53:39 np0005605476 systemd[1]: Started libpod-conmon-ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915.scope.
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.458 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.461 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[abd9f645-2af9-4462-a316-35c655f7b3a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.383095298 +0000 UTC m=+0.021481045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.485 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ad27c-f8dc-4473-9f5f-9b7af3b296b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.487 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[acf3515a-02b0-4034-9c42-188a47130d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.494435488 +0000 UTC m=+0.132821235 container init ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.500 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0310f4f-48f4-4b6c-a361-b575ea849433]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405600, 'reachable_time': 36178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261706, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.501595459 +0000 UTC m=+0.139981176 container start ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:53:39 np0005605476 systemd[1]: run-netns-ovnmeta\x2dac1b83e6\x2d8e85\x2d484a\x2d9623\x2d8960b1107077.mount: Deactivated successfully.
Feb  2 12:53:39 np0005605476 wizardly_gauss[261701]: 167 167
Feb  2 12:53:39 np0005605476 systemd[1]: libpod-ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915.scope: Deactivated successfully.
Feb  2 12:53:39 np0005605476 conmon[261701]: conmon ff81037bc7b290e6054e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915.scope/container/memory.events
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.505 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:53:39 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:39.505 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[e7945bfa-9556-46c3-a758-6ae3a3646f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.512155726 +0000 UTC m=+0.150541433 container attach ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.51301964 +0000 UTC m=+0.151405357 container died ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:53:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 41 KiB/s wr, 395 op/s
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.585 239853 DEBUG nova.compute.manager [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-unplugged-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.586 239853 DEBUG oslo_concurrency.lockutils [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.586 239853 DEBUG oslo_concurrency.lockutils [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.586 239853 DEBUG oslo_concurrency.lockutils [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.586 239853 DEBUG nova.compute.manager [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] No waiting events found dispatching network-vif-unplugged-dc58cc1e-fb01-4551-8977-093651241115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.587 239853 DEBUG nova.compute.manager [req-5b4a3dcd-bf17-480f-86c9-712e2a4afa33 req-99ba2953-c187-45e2-92ea-ce3db9f6f910 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-unplugged-dc58cc1e-fb01-4551-8977-093651241115 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: var-lib-containers-storage-overlay-742a645f684119fee0100d5d5da379b684a64a479c9ddec848b49b2b80df5e38-merged.mount: Deactivated successfully.
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/732924235' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:53:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3315963986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:53:39 np0005605476 podman[261654]: 2026-02-02 17:53:39.659860478 +0000 UTC m=+0.298246195 container remove ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gauss, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.671 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: libpod-conmon-ff81037bc7b290e6054e67eb30aed4a484b0fb3e43432a6ece1a598dc5858915.scope: Deactivated successfully.
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.759 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.760 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.777 239853 INFO nova.virt.libvirt.driver [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Deleting instance files /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349_del#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.777 239853 INFO nova.virt.libvirt.driver [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Deletion of /var/lib/nova/instances/860aa53f-382e-4f4e-98bd-bd89a752e349_del complete#033[00m
Feb  2 12:53:39 np0005605476 podman[261734]: 2026-02-02 17:53:39.81712186 +0000 UTC m=+0.051949232 container create f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.859 239853 INFO nova.compute.manager [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.860 239853 DEBUG oslo.service.loopingcall [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.860 239853 DEBUG nova.compute.manager [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.860 239853 DEBUG nova.network.neutron [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:53:39 np0005605476 systemd[1]: Started libpod-conmon-f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7.scope.
Feb  2 12:53:39 np0005605476 podman[261734]: 2026-02-02 17:53:39.789263296 +0000 UTC m=+0.024090678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:39 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f52b4bae8e007d11cd6b340675163954c58c4f1baf44141787afcb48ff47ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f52b4bae8e007d11cd6b340675163954c58c4f1baf44141787afcb48ff47ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f52b4bae8e007d11cd6b340675163954c58c4f1baf44141787afcb48ff47ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:39 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f52b4bae8e007d11cd6b340675163954c58c4f1baf44141787afcb48ff47ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:39 np0005605476 podman[261734]: 2026-02-02 17:53:39.928214203 +0000 UTC m=+0.163041585 container init f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:53:39 np0005605476 podman[261734]: 2026-02-02 17:53:39.932994757 +0000 UTC m=+0.167822109 container start f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.937 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.939 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4390MB free_disk=59.98796075861901GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.939 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:39 np0005605476 nova_compute[239846]: 2026-02-02 17:53:39.940 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:39 np0005605476 podman[261734]: 2026-02-02 17:53:39.940250421 +0000 UTC m=+0.175077783 container attach f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.008 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951400989' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951400989' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.038 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 860aa53f-382e-4f4e-98bd-bd89a752e349 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.039 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.040 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.078 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:40 np0005605476 frosty_brown[261751]: {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    "0": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "devices": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "/dev/loop3"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            ],
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_name": "ceph_lv0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_size": "21470642176",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "name": "ceph_lv0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "tags": {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_name": "ceph",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.crush_device_class": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.encrypted": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.objectstore": "bluestore",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_id": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.vdo": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.with_tpm": "0"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            },
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "vg_name": "ceph_vg0"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        }
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    ],
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    "1": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "devices": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "/dev/loop4"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            ],
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_name": "ceph_lv1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_size": "21470642176",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "name": "ceph_lv1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "tags": {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_name": "ceph",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.crush_device_class": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.encrypted": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.objectstore": "bluestore",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_id": "1",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.vdo": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.with_tpm": "0"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            },
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "vg_name": "ceph_vg1"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        }
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    ],
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    "2": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "devices": [
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "/dev/loop5"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            ],
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_name": "ceph_lv2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_size": "21470642176",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "name": "ceph_lv2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "tags": {
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.cluster_name": "ceph",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.crush_device_class": "",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.encrypted": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.objectstore": "bluestore",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osd_id": "2",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.vdo": "0",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:                "ceph.with_tpm": "0"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            },
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "type": "block",
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:            "vg_name": "ceph_vg2"
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:        }
Feb  2 12:53:40 np0005605476 frosty_brown[261751]:    ]
Feb  2 12:53:40 np0005605476 frosty_brown[261751]: }
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:40 np0005605476 systemd[1]: libpod-f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7.scope: Deactivated successfully.
Feb  2 12:53:40 np0005605476 conmon[261751]: conmon f610177a3bbb1b3adc86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7.scope/container/memory.events
Feb  2 12:53:40 np0005605476 podman[261734]: 2026-02-02 17:53:40.232363013 +0000 UTC m=+0.467190375 container died f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:53:40 np0005605476 podman[261734]: 2026-02-02 17:53:40.347806059 +0000 UTC m=+0.582633421 container remove f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:53:40 np0005605476 systemd[1]: libpod-conmon-f610177a3bbb1b3adc861cfaa44dc7ce0c4c6d666039522bb7e0aedc8adfbfd7.scope: Deactivated successfully.
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:53:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621286031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.613 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.619 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.640 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.668 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.669 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.705 239853 DEBUG nova.network.neutron [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.737 239853 INFO nova.compute.manager [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Took 0.88 seconds to deallocate network for instance.#033[00m
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.805485726 +0000 UTC m=+0.042772904 container create 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.811 239853 DEBUG nova.compute.manager [req-c0dff1ad-7dd6-4649-b477-30186a7999b6 req-48294caf-44f3-4007-8608-c4ae89ca8d26 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-deleted-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:40 np0005605476 systemd[1]: Started libpod-conmon-44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed.scope.
Feb  2 12:53:40 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.78145518 +0000 UTC m=+0.018742338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.888365565 +0000 UTC m=+0.125652723 container init 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.893418087 +0000 UTC m=+0.130705225 container start 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 12:53:40 np0005605476 focused_pasteur[261872]: 167 167
Feb  2 12:53:40 np0005605476 systemd[1]: libpod-44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed.scope: Deactivated successfully.
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.905817495 +0000 UTC m=+0.143104653 container attach 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.906565046 +0000 UTC m=+0.143852184 container died 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.914 239853 INFO nova.compute.manager [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:53:40 np0005605476 nova_compute[239846]: 2026-02-02 17:53:40.915 239853 DEBUG nova.compute.manager [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Deleting volume: a528a063-0f57-4cdb-9ed5-bd76945a6312 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 12:53:40 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7f67063b171cde300dc7fbc06d7a86db42cc78cd5ddbc385fe5d364be74b2ac2-merged.mount: Deactivated successfully.
Feb  2 12:53:40 np0005605476 podman[261855]: 2026-02-02 17:53:40.974410154 +0000 UTC m=+0.211697292 container remove 44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pasteur, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:53:40 np0005605476 systemd[1]: libpod-conmon-44378f300f05993f7e0585bbbb922d883732a886916fe3826084767393b720ed.scope: Deactivated successfully.
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.112 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.113 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.118209506 +0000 UTC m=+0.063586998 container create 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:53:41 np0005605476 systemd[1]: Started libpod-conmon-99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516.scope.
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.173 239853 DEBUG oslo_concurrency.processutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.086815024 +0000 UTC m=+0.032192486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:53:41 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:53:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2397d87dd555fcd76ab3a5988d2738b8d2b287b2980afb19958f4a34fe7b5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2397d87dd555fcd76ab3a5988d2738b8d2b287b2980afb19958f4a34fe7b5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2397d87dd555fcd76ab3a5988d2738b8d2b287b2980afb19958f4a34fe7b5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:41 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2397d87dd555fcd76ab3a5988d2738b8d2b287b2980afb19958f4a34fe7b5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.212120576 +0000 UTC m=+0.157498048 container init 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.219171995 +0000 UTC m=+0.164549447 container start 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.229896986 +0000 UTC m=+0.175274428 container attach 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:53:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 34 KiB/s wr, 388 op/s
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452260110' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452260110' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:41 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:41.621 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.664 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.664 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.689 239853 DEBUG nova.compute.manager [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.689 239853 DEBUG oslo_concurrency.lockutils [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.689 239853 DEBUG oslo_concurrency.lockutils [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.690 239853 DEBUG oslo_concurrency.lockutils [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.690 239853 DEBUG nova.compute.manager [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] No waiting events found dispatching network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.690 239853 WARNING nova.compute.manager [req-edc8d0ed-9de1-4cc7-805e-0170e71d50d2 req-9f8c575c-6c54-4cd8-9e4a-5c5ae4a4a476 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Received unexpected event network-vif-plugged-dc58cc1e-fb01-4551-8977-093651241115 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.711 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:41 np0005605476 lvm[262005]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:53:41 np0005605476 lvm[262005]: VG ceph_vg0 finished
Feb  2 12:53:41 np0005605476 lvm[262007]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:53:41 np0005605476 lvm[262007]: VG ceph_vg1 finished
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1021149445' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152763306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1021149445' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:41 np0005605476 lvm[262010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:53:41 np0005605476 lvm[262010]: VG ceph_vg2 finished
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.830 239853 DEBUG oslo_concurrency.processutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.836 239853 DEBUG nova.compute.provider_tree [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.861 239853 DEBUG nova.scheduler.client.report [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.889 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:41 np0005605476 focused_hamilton[261910]: {}
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.922 239853 INFO nova.scheduler.client.report [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance 860aa53f-382e-4f4e-98bd-bd89a752e349#033[00m
Feb  2 12:53:41 np0005605476 systemd[1]: libpod-99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516.scope: Deactivated successfully.
Feb  2 12:53:41 np0005605476 systemd[1]: libpod-99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516.scope: Consumed 1.037s CPU time.
Feb  2 12:53:41 np0005605476 podman[261894]: 2026-02-02 17:53:41.93895899 +0000 UTC m=+0.884336452 container died 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:53:41 np0005605476 nova_compute[239846]: 2026-02-02 17:53:41.999 239853 DEBUG oslo_concurrency.lockutils [None req-bd87ee59-cef5-4d82-a583-d4999ecb3515 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "860aa53f-382e-4f4e-98bd-bd89a752e349" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:42 np0005605476 systemd[1]: var-lib-containers-storage-overlay-0b2397d87dd555fcd76ab3a5988d2738b8d2b287b2980afb19958f4a34fe7b5e-merged.mount: Deactivated successfully.
Feb  2 12:53:42 np0005605476 podman[261894]: 2026-02-02 17:53:42.06878998 +0000 UTC m=+1.014167432 container remove 99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:53:42 np0005605476 systemd[1]: libpod-conmon-99c45ab7b6d4b31fdad36883c3a238fdc5e01e7a59c1d941aa748aa1b3768516.scope: Deactivated successfully.
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:53:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2391823395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2391823395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:43 np0005605476 nova_compute[239846]: 2026-02-02 17:53:43.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 30 KiB/s wr, 341 op/s
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Feb  2 12:53:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Feb  2 12:53:44 np0005605476 nova_compute[239846]: 2026-02-02 17:53:44.347 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122776808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122776808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Feb  2 12:53:44 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Feb  2 12:53:45 np0005605476 nova_compute[239846]: 2026-02-02 17:53:45.005 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Feb  2 12:53:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Feb  2 12:53:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Feb  2 12:53:45 np0005605476 nova_compute[239846]: 2026-02-02 17:53:45.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:53:45 np0005605476 nova_compute[239846]: 2026-02-02 17:53:45.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:53:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 119 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 12 KiB/s wr, 343 op/s
Feb  2 12:53:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:46.644 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:46.644 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:53:46.644 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:53:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Feb  2 12:53:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Feb  2 12:53:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 119 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 12 KiB/s wr, 343 op/s
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.433963744584616e-06 of space, bias 1.0, pg target 0.0019301891233753847 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006446624753872165 of space, bias 1.0, pg target 0.19339874261616496 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.1292951071783227e-06 of space, bias 1.0, pg target 0.0006387885321534968 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665611527911106 of space, bias 1.0, pg target 0.19996834583733317 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.264803666392209e-07 of space, bias 4.0, pg target 0.0008717764399670651 quantized to 16 (current 16)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:53:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:53:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Feb  2 12:53:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Feb  2 12:53:47 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Feb  2 12:53:48 np0005605476 podman[262050]: 2026-02-02 17:53:48.601644638 +0000 UTC m=+0.047913428 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:53:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Feb  2 12:53:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Feb  2 12:53:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530460145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530460145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:49 np0005605476 nova_compute[239846]: 2026-02-02 17:53:49.351 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 88 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 13 KiB/s wr, 370 op/s
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Feb  2 12:53:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Feb  2 12:53:50 np0005605476 nova_compute[239846]: 2026-02-02 17:53:50.007 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1988661982' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1988661982' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Feb  2 12:53:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Feb  2 12:53:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 13 KiB/s wr, 378 op/s
Feb  2 12:53:52 np0005605476 podman[262071]: 2026-02-02 17:53:52.640196162 +0000 UTC m=+0.092391248 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 12:53:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Feb  2 12:53:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Feb  2 12:53:52 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Feb  2 12:53:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 88 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 11 KiB/s wr, 316 op/s
Feb  2 12:53:54 np0005605476 nova_compute[239846]: 2026-02-02 17:53:54.315 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054819.312991, 860aa53f-382e-4f4e-98bd-bd89a752e349 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:53:54 np0005605476 nova_compute[239846]: 2026-02-02 17:53:54.315 239853 INFO nova.compute.manager [-] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:53:54 np0005605476 nova_compute[239846]: 2026-02-02 17:53:54.343 239853 DEBUG nova.compute.manager [None req-7442df97-c7bb-441e-9712-ad9aacbfb4f1 - - - - - -] [instance: 860aa53f-382e-4f4e-98bd-bd89a752e349] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:53:54 np0005605476 nova_compute[239846]: 2026-02-02 17:53:54.353 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652864257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652864257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:55 np0005605476 nova_compute[239846]: 2026-02-02 17:53:55.009 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:53:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Feb  2 12:53:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Feb  2 12:53:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Feb  2 12:53:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 3.7 MiB/s wr, 181 op/s
Feb  2 12:53:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112337624' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112337624' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 2.9 MiB/s wr, 142 op/s
Feb  2 12:53:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:53:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2565378157' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:53:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:53:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2565378157' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:53:59 np0005605476 nova_compute[239846]: 2026-02-02 17:53:59.356 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:53:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 2.7 MiB/s wr, 136 op/s
Feb  2 12:53:59 np0005605476 nova_compute[239846]: 2026-02-02 17:53:59.779 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:53:59 np0005605476 nova_compute[239846]: 2026-02-02 17:53:59.779 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:53:59 np0005605476 nova_compute[239846]: 2026-02-02 17:53:59.816 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.011 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.045 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.045 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.057 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.057 239853 INFO nova.compute.claims [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:54:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.332 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364429338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:54:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877967281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.924 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.931 239853 DEBUG nova.compute.provider_tree [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:54:00 np0005605476 nova_compute[239846]: 2026-02-02 17:54:00.965 239853 DEBUG nova.scheduler.client.report [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.051 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.052 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.154 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.155 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.190 239853 INFO nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.242 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.348 239853 INFO nova.virt.block_device [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Booting with volume 940f8ac7-d625-4924-b995-4acd1d4befc1 at /dev/vda#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.410 239853 DEBUG nova.policy [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.521 239853 DEBUG os_brick.utils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.522 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 2.5 MiB/s wr, 145 op/s
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.536 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.536 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd2a41d-71bf-4180-9753-57a3f8021ce8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.538 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.544 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.545 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc4390-ec27-4089-9297-294bc6653cce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.547 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.555 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.556 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[52a34779-25d5-431a-9e4c-5b32b14bd1de]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.557 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[95b243fb-9147-440e-8ea1-fe0e1d4a1175]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.558 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.577 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.580 239853 DEBUG os_brick.initiator.connectors.lightos [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.580 239853 DEBUG os_brick.initiator.connectors.lightos [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.580 239853 DEBUG os_brick.initiator.connectors.lightos [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.581 239853 DEBUG os_brick.utils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:54:01 np0005605476 nova_compute[239846]: 2026-02-02 17:54:01.581 239853 DEBUG nova.virt.block_device [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating existing volume attachment record: 282c6329-61d3-4960-b3f0-463f5cdc2c1d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:54:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Feb  2 12:54:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Feb  2 12:54:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Feb  2 12:54:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Feb  2 12:54:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154918713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Feb  2 12:54:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Feb  2 12:54:02 np0005605476 nova_compute[239846]: 2026-02-02 17:54:02.841 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Successfully created port: 48a7d2ef-4191-450c-b755-4c5e879a0285 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.288 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.289 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.289 239853 INFO nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Creating image(s)#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.290 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.290 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Ensure instance console log exists: /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.290 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.290 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:03 np0005605476 nova_compute[239846]: 2026-02-02 17:54:03.291 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.7 KiB/s wr, 59 op/s
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.191 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Successfully updated port: 48a7d2ef-4191-450c-b755-4c5e879a0285 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.256 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.256 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.256 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.360 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.645 239853 DEBUG nova.compute.manager [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-changed-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.645 239853 DEBUG nova.compute.manager [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Refreshing instance network info cache due to event network-changed-48a7d2ef-4191-450c-b755-4c5e879a0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.646 239853 DEBUG oslo_concurrency.lockutils [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:04 np0005605476 nova_compute[239846]: 2026-02-02 17:54:04.772 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.010 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689686366' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689686366' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Feb  2 12:54:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Feb  2 12:54:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.708 239853 DEBUG nova.network.neutron [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating instance_info_cache with network_info: [{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.813 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.813 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Instance network_info: |[{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.815 239853 DEBUG oslo_concurrency.lockutils [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.815 239853 DEBUG nova.network.neutron [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Refreshing network info cache for port 48a7d2ef-4191-450c-b755-4c5e879a0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.819 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Start _get_guest_xml network_info=[{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': True, 'disk_bus': 'virtio', 'attachment_id': '282c6329-61d3-4960-b3f0-463f5cdc2c1d', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-940f8ac7-d625-4924-b995-4acd1d4befc1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '940f8ac7-d625-4924-b995-4acd1d4befc1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a53bf075-1459-4c3e-a411-2ee0267d280a', 'attached_at': '', 'detached_at': '', 'volume_id': '940f8ac7-d625-4924-b995-4acd1d4befc1', 'serial': '940f8ac7-d625-4924-b995-4acd1d4befc1'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.824 239853 WARNING nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.831 239853 DEBUG nova.virt.libvirt.host [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.832 239853 DEBUG nova.virt.libvirt.host [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.837 239853 DEBUG nova.virt.libvirt.host [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.838 239853 DEBUG nova.virt.libvirt.host [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.838 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.838 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.839 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.839 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.839 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.839 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.839 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.840 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.840 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.840 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.840 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.840 239853 DEBUG nova.virt.hardware [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.940 239853 DEBUG nova.storage.rbd_utils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:05 np0005605476 nova_compute[239846]: 2026-02-02 17:54:05.943 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029386291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.502 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723304670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.581 239853 DEBUG nova.virt.libvirt.vif [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:53:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-213705024',display_name='tempest-TestVolumeBootPattern-volume-backed-server-213705024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-213705024',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCo+bZms1uWgtoO9xtR0soZQK4AH/2rpYkWJVnV3jxr7yl1icgiNFkifyBxQ9TjTMgkW7oRRaJJoS+pLaSs502TgdRV9mj2JCfdTmkSDSILI1onZ3oMMZof3bhJng3arrw==',key_name='tempest-keypair-619338074',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-z97ghp7q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:54:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=a53bf075-1459-4c3e-a411-2ee0267d280a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.582 239853 DEBUG nova.network.os_vif_util [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.583 239853 DEBUG nova.network.os_vif_util [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.585 239853 DEBUG nova.objects.instance [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid a53bf075-1459-4c3e-a411-2ee0267d280a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.629 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <uuid>a53bf075-1459-4c3e-a411-2ee0267d280a</uuid>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <name>instance-00000011</name>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-213705024</nova:name>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:54:05</nova:creationTime>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <nova:port uuid="48a7d2ef-4191-450c-b755-4c5e879a0285">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="serial">a53bf075-1459-4c3e-a411-2ee0267d280a</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="uuid">a53bf075-1459-4c3e-a411-2ee0267d280a</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-940f8ac7-d625-4924-b995-4acd1d4befc1">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <serial>940f8ac7-d625-4924-b995-4acd1d4befc1</serial>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:14:83:6f"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <target dev="tap48a7d2ef-41"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/console.log" append="off"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:54:06 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:54:06 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:54:06 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:54:06 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.631 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Preparing to wait for external event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.632 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.632 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.633 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.634 239853 DEBUG nova.virt.libvirt.vif [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:53:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-213705024',display_name='tempest-TestVolumeBootPattern-volume-backed-server-213705024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-213705024',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCo+bZms1uWgtoO9xtR0soZQK4AH/2rpYkWJVnV3jxr7yl1icgiNFkifyBxQ9TjTMgkW7oRRaJJoS+pLaSs502TgdRV9mj2JCfdTmkSDSILI1onZ3oMMZof3bhJng3arrw==',key_name='tempest-keypair-619338074',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-z97ghp7q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:54:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=a53bf075-1459-4c3e-a411-2ee0267d280a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.635 239853 DEBUG nova.network.os_vif_util [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.636 239853 DEBUG nova.network.os_vif_util [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.637 239853 DEBUG os_vif [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.638 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.638 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.639 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.644 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.644 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48a7d2ef-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.645 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap48a7d2ef-41, col_values=(('external_ids', {'iface-id': '48a7d2ef-4191-450c-b755-4c5e879a0285', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:83:6f', 'vm-uuid': 'a53bf075-1459-4c3e-a411-2ee0267d280a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.647 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:06 np0005605476 NetworkManager[49022]: <info>  [1770054846.6492] manager: (tap48a7d2ef-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.650 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.653 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.654 239853 INFO os_vif [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41')#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.831 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.831 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.831 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:14:83:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.832 239853 INFO nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Using config drive#033[00m
Feb  2 12:54:06 np0005605476 nova_compute[239846]: 2026-02-02 17:54:06.849 239853 DEBUG nova.storage.rbd_utils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.287 239853 DEBUG nova.network.neutron [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updated VIF entry in instance network info cache for port 48a7d2ef-4191-450c-b755-4c5e879a0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.288 239853 DEBUG nova.network.neutron [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating instance_info_cache with network_info: [{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.351 239853 INFO nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Creating config drive at /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config#033[00m
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.361 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp168h6hvn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.385 239853 DEBUG oslo_concurrency.lockutils [req-23f0c920-9131-42f6-b8db-d616b5813728 req-5170f381-0c00-4b9d-bdc0-52454e723669 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Feb  2 12:54:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.494 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp168h6hvn" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.572 239853 DEBUG nova.storage.rbd_utils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:07 np0005605476 nova_compute[239846]: 2026-02-02 17:54:07.575 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.100 239853 DEBUG oslo_concurrency.processutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config a53bf075-1459-4c3e-a411-2ee0267d280a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.101 239853 INFO nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Deleting local config drive /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a/disk.config because it was imported into RBD.#033[00m
Feb  2 12:54:08 np0005605476 kernel: tap48a7d2ef-41: entered promiscuous mode
Feb  2 12:54:08 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:08Z|00166|binding|INFO|Claiming lport 48a7d2ef-4191-450c-b755-4c5e879a0285 for this chassis.
Feb  2 12:54:08 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:08Z|00167|binding|INFO|48a7d2ef-4191-450c-b755-4c5e879a0285: Claiming fa:16:3e:14:83:6f 10.100.0.7
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.157 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.1606] manager: (tap48a7d2ef-41): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Feb  2 12:54:08 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:08Z|00168|binding|INFO|Setting lport 48a7d2ef-4191-450c-b755-4c5e879a0285 ovn-installed in OVS
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.165 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.166 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.169 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 systemd-machined[208080]: New machine qemu-17-instance-00000011.
Feb  2 12:54:08 np0005605476 systemd-udevd[262239]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:54:08 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:08Z|00169|binding|INFO|Setting lport 48a7d2ef-4191-450c-b755-4c5e879a0285 up in Southbound
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.194 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:83:6f 10.100.0.7'], port_security=['fa:16:3e:14:83:6f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a53bf075-1459-4c3e-a411-2ee0267d280a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3147ee48-b44e-4242-8857-9bd3cf787c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=48a7d2ef-4191-450c-b755-4c5e879a0285) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:54:08 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.195 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 48a7d2ef-4191-450c-b755-4c5e879a0285 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.196 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:54:08 np0005605476 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.2041] device (tap48a7d2ef-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.2051] device (tap48a7d2ef-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.204 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[61418a64-5d51-4934-8b5f-316fbe8cef4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.206 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac1b83e6-81 in ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.208 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac1b83e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.208 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d70b8ebc-a632-4e4d-bdbd-b74ede0f2cb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.208 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b2265fa5-17ba-4747-83f6-3cda2f5e7371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.216 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[d208a083-12d9-425e-a838-06c94b656df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.225 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[371bc99e-c052-4e60-ab97-a96b03f62d35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.246 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[bc50f83e-71ff-467e-91e7-ad05f835e203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.2505] manager: (tapac1b83e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.250 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2610e9b2-a0fe-4276-9a25-9ac26919e2de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.270 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[740ea945-ad8a-4f41-b984-963d7fba5056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.273 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0b3f53-c85d-4382-9329-09a162ae883b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.2848] device (tapac1b83e6-80): carrier: link connected
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.288 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cf85eb-8f51-4b07-87ba-d159eb26bd51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.298 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fb76e235-a2a6-4e39-9821-64c8fe83356b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408961, 'reachable_time': 27658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262273, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.305 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2dabf137-1b67-4faf-ace0-69c4f7fef04f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:c725'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408961, 'tstamp': 408961}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262274, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.318 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[066ffa41-d49e-4cd3-af65-ed96b9c3d04e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408961, 'reachable_time': 27658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262275, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.337 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[351e0e7f-26ce-48bf-adcb-92a99c4c0c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.372 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b498a8-f022-4181-8194-d649e36565e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.387 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.387 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.388 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.390 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 kernel: tapac1b83e6-80: entered promiscuous mode
Feb  2 12:54:08 np0005605476 NetworkManager[49022]: <info>  [1770054848.3927] manager: (tapac1b83e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.394 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.394 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:08 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:08Z|00170|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.404 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.405 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.406 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[db684a64-b3c8-45cb-9670-b09ec5bf23f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.407 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:54:08 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:08.408 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'env', 'PROCESS_TAG=haproxy-ac1b83e6-8e85-484a-9623-8960b1107077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac1b83e6-8e85-484a-9623-8960b1107077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:54:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Feb  2 12:54:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Feb  2 12:54:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.644 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054848.6442943, a53bf075-1459-4c3e-a411-2ee0267d280a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.646 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] VM Started (Lifecycle Event)#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.671 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.676 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054848.6451778, a53bf075-1459-4c3e-a411-2ee0267d280a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.676 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.825 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.830 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:54:08 np0005605476 podman[262349]: 2026-02-02 17:54:08.752354653 +0000 UTC m=+0.075249846 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.955 239853 DEBUG nova.compute.manager [req-51620c2d-b5bc-4862-93d4-425a25af14d8 req-9812353e-2ad9-40a2-b088-b921bfdaff49 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.955 239853 DEBUG oslo_concurrency.lockutils [req-51620c2d-b5bc-4862-93d4-425a25af14d8 req-9812353e-2ad9-40a2-b088-b921bfdaff49 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.956 239853 DEBUG oslo_concurrency.lockutils [req-51620c2d-b5bc-4862-93d4-425a25af14d8 req-9812353e-2ad9-40a2-b088-b921bfdaff49 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.956 239853 DEBUG oslo_concurrency.lockutils [req-51620c2d-b5bc-4862-93d4-425a25af14d8 req-9812353e-2ad9-40a2-b088-b921bfdaff49 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.956 239853 DEBUG nova.compute.manager [req-51620c2d-b5bc-4862-93d4-425a25af14d8 req-9812353e-2ad9-40a2-b088-b921bfdaff49 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Processing event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.958 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.961 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.965 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.966 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054848.960351, a53bf075-1459-4c3e-a411-2ee0267d280a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.966 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.970 239853 INFO nova.virt.libvirt.driver [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Instance spawned successfully.#033[00m
Feb  2 12:54:08 np0005605476 nova_compute[239846]: 2026-02-02 17:54:08.970 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.051 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.056 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:54:09 np0005605476 podman[262349]: 2026-02-02 17:54:09.09016359 +0000 UTC m=+0.413058363 container create 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.101 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.102 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.102 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.103 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.104 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.105 239853 DEBUG nova.virt.libvirt.driver [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.116 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:54:09 np0005605476 systemd[1]: Started libpod-conmon-17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6.scope.
Feb  2 12:54:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaeb776686caf92ef5b376434afaaae340ac927c211b24b64ff4002ee042e40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:09 np0005605476 podman[262349]: 2026-02-02 17:54:09.374568996 +0000 UTC m=+0.697463779 container init 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:54:09 np0005605476 podman[262349]: 2026-02-02 17:54:09.379868235 +0000 UTC m=+0.702763008 container start 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.382 239853 INFO nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Took 6.09 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.383 239853 DEBUG nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:09 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [NOTICE]   (262368) : New worker (262370) forked
Feb  2 12:54:09 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [NOTICE]   (262368) : Loading success.
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.466 239853 INFO nova.compute.manager [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Took 9.45 seconds to build instance.#033[00m
Feb  2 12:54:09 np0005605476 nova_compute[239846]: 2026-02-02 17:54:09.508 239853 DEBUG oslo_concurrency.lockutils [None req-9dc408ca-58d1-4066-a262-139e56145f72 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.0 KiB/s wr, 68 op/s
Feb  2 12:54:10 np0005605476 nova_compute[239846]: 2026-02-02 17:54:10.011 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Feb  2 12:54:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Feb  2 12:54:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.078 239853 DEBUG nova.compute.manager [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.078 239853 DEBUG oslo_concurrency.lockutils [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.079 239853 DEBUG oslo_concurrency.lockutils [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.079 239853 DEBUG oslo_concurrency.lockutils [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.079 239853 DEBUG nova.compute.manager [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] No waiting events found dispatching network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.079 239853 WARNING nova.compute.manager [req-6bc800dc-7fab-455b-8feb-3f2c2e65045b req-8c7a9ccf-7122-4227-a64d-c8d650ec0806 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received unexpected event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 for instance with vm_state active and task_state None.#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.087 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:11 np0005605476 NetworkManager[49022]: <info>  [1770054851.0932] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Feb  2 12:54:11 np0005605476 NetworkManager[49022]: <info>  [1770054851.0939] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Feb  2 12:54:11 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:11Z|00171|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.136 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.151 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 107 op/s
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.647 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.706 239853 DEBUG nova.compute.manager [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-changed-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.706 239853 DEBUG nova.compute.manager [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Refreshing instance network info cache due to event network-changed-48a7d2ef-4191-450c-b755-4c5e879a0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.706 239853 DEBUG oslo_concurrency.lockutils [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.707 239853 DEBUG oslo_concurrency.lockutils [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:11 np0005605476 nova_compute[239846]: 2026-02-02 17:54:11.707 239853 DEBUG nova.network.neutron [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Refreshing network info cache for port 48a7d2ef-4191-450c-b755-4c5e879a0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:54:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Feb  2 12:54:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Feb  2 12:54:12 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Feb  2 12:54:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2541200093' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 107 op/s
Feb  2 12:54:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Feb  2 12:54:13 np0005605476 nova_compute[239846]: 2026-02-02 17:54:13.980 239853 DEBUG nova.network.neutron [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updated VIF entry in instance network info cache for port 48a7d2ef-4191-450c-b755-4c5e879a0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:54:13 np0005605476 nova_compute[239846]: 2026-02-02 17:54:13.981 239853 DEBUG nova.network.neutron [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating instance_info_cache with network_info: [{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Feb  2 12:54:14 np0005605476 nova_compute[239846]: 2026-02-02 17:54:14.087 239853 DEBUG oslo_concurrency.lockutils [req-e465815c-c287-4440-af93-e7cb40b82ab5 req-b55047fd-fecb-4e3c-9821-4e0bab924944 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4047366527' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4047366527' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:15 np0005605476 nova_compute[239846]: 2026-02-02 17:54:15.013 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Feb  2 12:54:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Feb  2 12:54:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Feb  2 12:54:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.4 KiB/s wr, 240 op/s
Feb  2 12:54:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412631667' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/412631667' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:16 np0005605476 nova_compute[239846]: 2026-02-02 17:54:16.650 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Feb  2 12:54:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.5 KiB/s wr, 246 op/s
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786268253' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786268253' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 143 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 263 op/s
Feb  2 12:54:19 np0005605476 podman[262380]: 2026-02-02 17:54:19.630784198 +0000 UTC m=+0.078120397 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb  2 12:54:19 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:19Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:83:6f 10.100.0.7
Feb  2 12:54:19 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:19Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:83:6f 10.100.0.7
Feb  2 12:54:20 np0005605476 nova_compute[239846]: 2026-02-02 17:54:20.015 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Feb  2 12:54:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Feb  2 12:54:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Feb  2 12:54:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1898178461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 154 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.9 MiB/s wr, 228 op/s
Feb  2 12:54:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Feb  2 12:54:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Feb  2 12:54:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Feb  2 12:54:21 np0005605476 nova_compute[239846]: 2026-02-02 17:54:21.689 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Feb  2 12:54:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Feb  2 12:54:22 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Feb  2 12:54:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 154 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 3.0 MiB/s wr, 194 op/s
Feb  2 12:54:23 np0005605476 podman[262399]: 2026-02-02 17:54:23.630024188 +0000 UTC m=+0.074095344 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:54:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567574739' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Feb  2 12:54:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Feb  2 12:54:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Feb  2 12:54:25 np0005605476 nova_compute[239846]: 2026-02-02 17:54:25.059 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497016119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497016119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Feb  2 12:54:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Feb  2 12:54:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 721 KiB/s rd, 1.9 MiB/s wr, 136 op/s
Feb  2 12:54:26 np0005605476 nova_compute[239846]: 2026-02-02 17:54:26.693 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Feb  2 12:54:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Feb  2 12:54:27 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Feb  2 12:54:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 585 KiB/s rd, 1.5 MiB/s wr, 110 op/s
Feb  2 12:54:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 1.3 MiB/s wr, 114 op/s
Feb  2 12:54:30 np0005605476 nova_compute[239846]: 2026-02-02 17:54:30.060 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086789898' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Feb  2 12:54:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808050896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808050896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 29 KiB/s wr, 102 op/s
Feb  2 12:54:31 np0005605476 nova_compute[239846]: 2026-02-02 17:54:31.696 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Feb  2 12:54:31 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Feb  2 12:54:32 np0005605476 nova_compute[239846]: 2026-02-02 17:54:32.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Feb  2 12:54:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Feb  2 12:54:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Feb  2 12:54:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1639254744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1639254744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 29 KiB/s wr, 96 op/s
Feb  2 12:54:35 np0005605476 nova_compute[239846]: 2026-02-02 17:54:35.062 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Feb  2 12:54:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Feb  2 12:54:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Feb  2 12:54:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 14 KiB/s wr, 175 op/s
Feb  2 12:54:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:36 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3600059027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:36 np0005605476 nova_compute[239846]: 2026-02-02 17:54:36.699 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:54:36
Feb  2 12:54:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:54:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:54:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Feb  2 12:54:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:54:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Feb  2 12:54:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Feb  2 12:54:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 11 KiB/s wr, 143 op/s
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:54:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.388 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.388 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.389 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.389 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.389 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:54:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801510083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:54:38 np0005605476 nova_compute[239846]: 2026-02-02 17:54:38.986 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.021 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.023 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.178 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.178 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.183 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.348 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.349 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.352 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.354 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4201MB free_disk=59.98771141562611GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.354 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.358 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.358 239853 INFO nova.compute.claims [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:54:39 np0005605476 nova_compute[239846]: 2026-02-02 17:54:39.535 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 11 KiB/s wr, 143 op/s
Feb  2 12:54:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Feb  2 12:54:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Feb  2 12:54:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.064 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654340277' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.178 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.183 239853 DEBUG nova.compute.provider_tree [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.203 239853 DEBUG nova.scheduler.client.report [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809800198' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.230 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.231 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.233 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.296 239853 INFO nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.298 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.299 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.313 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance a53bf075-1459-4c3e-a411-2ee0267d280a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.314 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 62d2d76c-ea08-478d-abff-dd6c432e51af actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.314 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.314 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538205872' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.324 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538205872' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.379 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.411 239853 INFO nova.virt.block_device [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Booting with volume snapshot a28889c8-9a56-4085-96d5-3b9544c6ced9 at /dev/vda#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.556 239853 DEBUG nova.policy [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:54:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073396005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.924 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.928 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.953 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.983 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:54:40 np0005605476 nova_compute[239846]: 2026-02-02 17:54:40.983 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:41 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:41Z|00172|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.322 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Successfully created port: d6d55a9a-3209-4cd6-8a7b-4f61ea296ece _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:54:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Feb  2 12:54:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Feb  2 12:54:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Feb  2 12:54:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.0 KiB/s wr, 83 op/s
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.702 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.983 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.983 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.984 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:54:41 np0005605476 nova_compute[239846]: 2026-02-02 17:54:41.984 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.034 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.065 239853 DEBUG os_brick.utils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.065 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.074 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.074 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbd187e-58ba-4986-b3cc-00b7f7c852b4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.075 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.081 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.081 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1322dc-9c8b-408c-b6a7-da4693fc93a6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.082 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.088 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.089 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[c4506631-bee7-4c9b-b129-2adf39853264]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.091 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6757ad-8065-48b6-8cda-865ee6eb1821]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.091 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.111 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.113 239853 DEBUG os_brick.initiator.connectors.lightos [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.113 239853 DEBUG os_brick.initiator.connectors.lightos [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.113 239853 DEBUG os_brick.initiator.connectors.lightos [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.113 239853 DEBUG os_brick.utils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.114 239853 DEBUG nova.virt.block_device [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updating existing volume attachment record: 71d05216-25d3-44a5-8d72-f993f31a4e49 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:54:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Feb  2 12:54:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Feb  2 12:54:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.356 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.356 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.356 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.356 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid a53bf075-1459-4c3e-a411-2ee0267d280a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.500 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Successfully updated port: d6d55a9a-3209-4cd6-8a7b-4f61ea296ece _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.532 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.532 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.532 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.666 239853 DEBUG nova.compute.manager [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-changed-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.668 239853 DEBUG nova.compute.manager [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Refreshing instance network info cache due to event network-changed-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.668 239853 DEBUG oslo_concurrency.lockutils [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:42 np0005605476 nova_compute[239846]: 2026-02-02 17:54:42.765 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:54:42 np0005605476 podman[262596]: 2026-02-02 17:54:42.769946897 +0000 UTC m=+0.054585176 container exec 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:54:42 np0005605476 podman[262596]: 2026-02-02 17:54:42.861223873 +0000 UTC m=+0.145862122 container exec_died 49cf601899989c75e64c17f569867f9dc2bb2a6ce1f21706d993305a3a0b4d26 (image=quay.io/ceph/ceph:v20, name=ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:54:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4199428488' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.225 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.226 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.227 239853 INFO nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Creating image(s)#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.227 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.227 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Ensure instance console log exists: /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.228 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.228 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.228 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 51 op/s
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.614 239853 DEBUG nova.network.neutron [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updating instance_info_cache with network_info: [{"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.637 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.638 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Instance network_info: |[{"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.638 239853 DEBUG oslo_concurrency.lockutils [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.638 239853 DEBUG nova.network.neutron [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Refreshing network info cache for port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.641 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Start _get_guest_xml network_info=[{"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T17:54:30Z,direct_url=<?>,disk_format='qcow2',id=68879bae-06b2-4ebd-9426-376724b14bd8,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1291460922',owner='cdfa033071c341d29a9815152416777f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T17:54:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': True, 'disk_bus': 'virtio', 'attachment_id': '71d05216-25d3-44a5-8d72-f993f31a4e49', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-93100b3a-c311-4bac-931d-c3f35ef8736d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '93100b3a-c311-4bac-931d-c3f35ef8736d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '62d2d76c-ea08-478d-abff-dd6c432e51af', 'attached_at': '', 'detached_at': '', 'volume_id': '93100b3a-c311-4bac-931d-c3f35ef8736d', 'serial': '93100b3a-c311-4bac-931d-c3f35ef8736d'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.646 239853 WARNING nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.653 239853 DEBUG nova.virt.libvirt.host [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.654 239853 DEBUG nova.virt.libvirt.host [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.659 239853 DEBUG nova.virt.libvirt.host [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.660 239853 DEBUG nova.virt.libvirt.host [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.660 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.660 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T17:54:30Z,direct_url=<?>,disk_format='qcow2',id=68879bae-06b2-4ebd-9426-376724b14bd8,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1291460922',owner='cdfa033071c341d29a9815152416777f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T17:54:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.661 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.661 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.661 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.661 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.662 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.662 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.662 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.662 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.663 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.663 239853 DEBUG nova.virt.hardware [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.686 239853 DEBUG nova.storage.rbd_utils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.690 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.758 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating instance_info_cache with network_info: [{"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.776 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-a53bf075-1459-4c3e-a411-2ee0267d280a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.777 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.777 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:43 np0005605476 nova_compute[239846]: 2026-02-02 17:54:43.777 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:54:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:54:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046351105' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.247 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.281 239853 DEBUG nova.virt.libvirt.vif [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1295231063',id=18,image_ref='68879bae-06b2-4ebd-9426-376724b14bd8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3AJmOQ2wCUPzcRtEuFARYK45kpEOherY7vePVVnccOEsGkDUrhLVJDLlvSMS1USmAynrCgFqFaEult0hFZjMdEz3wzm3ZwEnOuDtsxD0DU2Udc0rDZR7vpdY5J/AvCgA==',key_name='tempest-keypair-648370203',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-vfz5je1x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1185251615',image_owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:54:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=62d2d76c-ea08-478d-abff-dd6c432e51af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building')
 vif={"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.281 239853 DEBUG nova.network.os_vif_util [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.282 239853 DEBUG nova.network.os_vif_util [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.283 239853 DEBUG nova.objects.instance [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d2d76c-ea08-478d-abff-dd6c432e51af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.295 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <uuid>62d2d76c-ea08-478d-abff-dd6c432e51af</uuid>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <name>instance-00000012</name>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1295231063</nova:name>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:54:43</nova:creationTime>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="68879bae-06b2-4ebd-9426-376724b14bd8"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <nova:port uuid="d6d55a9a-3209-4cd6-8a7b-4f61ea296ece">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="serial">62d2d76c-ea08-478d-abff-dd6c432e51af</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="uuid">62d2d76c-ea08-478d-abff-dd6c432e51af</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-93100b3a-c311-4bac-931d-c3f35ef8736d">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <serial>93100b3a-c311-4bac-931d-c3f35ef8736d</serial>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:f4:f5:62"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <target dev="tapd6d55a9a-32"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/console.log" append="off"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <input type="keyboard" bus="usb"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:54:44 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:54:44 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:54:44 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:54:44 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.296 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Preparing to wait for external event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.296 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.297 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.297 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.297 239853 DEBUG nova.virt.libvirt.vif [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1295231063',id=18,image_ref='68879bae-06b2-4ebd-9426-376724b14bd8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3AJmOQ2wCUPzcRtEuFARYK45kpEOherY7vePVVnccOEsGkDUrhLVJDLlvSMS1USmAynrCgFqFaEult0hFZjMdEz3wzm3ZwEnOuDtsxD0DU2Udc0rDZR7vpdY5J/AvCgA==',key_name='tempest-keypair-648370203',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-vfz5je1x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1185251615',image_owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:54:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=62d2d76c-ea08-478d-abff-dd6c432e51af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='
building') vif={"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.298 239853 DEBUG nova.network.os_vif_util [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.298 239853 DEBUG nova.network.os_vif_util [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.299 239853 DEBUG os_vif [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.299 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.299 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.300 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.303 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.303 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d55a9a-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.304 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6d55a9a-32, col_values=(('external_ids', {'iface-id': 'd6d55a9a-3209-4cd6-8a7b-4f61ea296ece', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:f5:62', 'vm-uuid': '62d2d76c-ea08-478d-abff-dd6c432e51af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.305 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:44 np0005605476 NetworkManager[49022]: <info>  [1770054884.3063] manager: (tapd6d55a9a-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.307 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.311 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.312 239853 INFO os_vif [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32')#033[00m
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.342004432 +0000 UTC m=+0.043163204 container create 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.355 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.356 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.356 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:f4:f5:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.357 239853 INFO nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Using config drive#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.380 239853 DEBUG nova.storage.rbd_utils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:44 np0005605476 systemd[1]: Started libpod-conmon-8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f.scope.
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.32735252 +0000 UTC m=+0.028511312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.434513113 +0000 UTC m=+0.135671915 container init 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.440034028 +0000 UTC m=+0.141192820 container start 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:54:44 np0005605476 mystifying_pike[263000]: 167 167
Feb  2 12:54:44 np0005605476 systemd[1]: libpod-8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f.scope: Deactivated successfully.
Feb  2 12:54:44 np0005605476 conmon[263000]: conmon 8bf5961257dc948b8b36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f.scope/container/memory.events
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.443736172 +0000 UTC m=+0.144894964 container attach 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.444800432 +0000 UTC m=+0.145959234 container died 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:54:44 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f700407356cfdba47ae005fae2c2964b4cea223cf0ac30440c17d918107a82e3-merged.mount: Deactivated successfully.
Feb  2 12:54:44 np0005605476 podman[262964]: 2026-02-02 17:54:44.477181033 +0000 UTC m=+0.178339805 container remove 8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:54:44 np0005605476 systemd[1]: libpod-conmon-8bf5961257dc948b8b36822223b56154936ac83123ecd59163c89e93c4d66e4f.scope: Deactivated successfully.
Feb  2 12:54:44 np0005605476 podman[263024]: 2026-02-02 17:54:44.641185563 +0000 UTC m=+0.044876622 container create 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:54:44 np0005605476 systemd[1]: Started libpod-conmon-7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0.scope.
Feb  2 12:54:44 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:44 np0005605476 podman[263024]: 2026-02-02 17:54:44.620869662 +0000 UTC m=+0.024560731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:44 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:44 np0005605476 podman[263024]: 2026-02-02 17:54:44.740822724 +0000 UTC m=+0.144513783 container init 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:54:44 np0005605476 podman[263024]: 2026-02-02 17:54:44.747564084 +0000 UTC m=+0.151255133 container start 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 12:54:44 np0005605476 podman[263024]: 2026-02-02 17:54:44.751743651 +0000 UTC m=+0.155434730 container attach 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:54:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:54:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:44 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.856 239853 INFO nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Creating config drive at /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.863 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3ejsjoec execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.946 239853 DEBUG nova.network.neutron [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updated VIF entry in instance network info cache for port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.947 239853 DEBUG nova.network.neutron [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updating instance_info_cache with network_info: [{"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.962 239853 DEBUG oslo_concurrency.lockutils [req-5830442c-a7fe-4404-8e58-42f6ded3d8ef req-4e74a008-0030-4d37-a5b3-5bf59d1e8657 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:44 np0005605476 nova_compute[239846]: 2026-02-02 17:54:44.985 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3ejsjoec" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.009 239853 DEBUG nova.storage.rbd_utils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.013 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config 62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.067 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.128 239853 DEBUG oslo_concurrency.processutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config 62d2d76c-ea08-478d-abff-dd6c432e51af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.129 239853 INFO nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Deleting local config drive /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af/disk.config because it was imported into RBD.#033[00m
Feb  2 12:54:45 np0005605476 eloquent_moser[263041]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:54:45 np0005605476 eloquent_moser[263041]: --> All data devices are unavailable
Feb  2 12:54:45 np0005605476 kernel: tapd6d55a9a-32: entered promiscuous mode
Feb  2 12:54:45 np0005605476 NetworkManager[49022]: <info>  [1770054885.1813] manager: (tapd6d55a9a-32): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Feb  2 12:54:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:45Z|00173|binding|INFO|Claiming lport d6d55a9a-3209-4cd6-8a7b-4f61ea296ece for this chassis.
Feb  2 12:54:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:45Z|00174|binding|INFO|d6d55a9a-3209-4cd6-8a7b-4f61ea296ece: Claiming fa:16:3e:f4:f5:62 10.100.0.4
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.182 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.189 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:f5:62 10.100.0.4'], port_security=['fa:16:3e:f4:f5:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '62d2d76c-ea08-478d-abff-dd6c432e51af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a620f03d-b32d-45ef-b068-da1cff51af0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.191 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.193 155391 INFO neutron.agent.ovn.metadata.agent [-] Port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:54:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:45Z|00175|binding|INFO|Setting lport d6d55a9a-3209-4cd6-8a7b-4f61ea296ece ovn-installed in OVS
Feb  2 12:54:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:45Z|00176|binding|INFO|Setting lport d6d55a9a-3209-4cd6-8a7b-4f61ea296ece up in Southbound
Feb  2 12:54:45 np0005605476 systemd[1]: libpod-7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0.scope: Deactivated successfully.
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.195 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.195 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 podman[263024]: 2026-02-02 17:54:45.19632939 +0000 UTC m=+0.600020399 container died 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.211 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fc24ea-b894-4f6c-8d53-7ef78d7d640e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 systemd-machined[208080]: New machine qemu-18-instance-00000012.
Feb  2 12:54:45 np0005605476 systemd-udevd[263114]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:54:45 np0005605476 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Feb  2 12:54:45 np0005605476 NetworkManager[49022]: <info>  [1770054885.2436] device (tapd6d55a9a-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:54:45 np0005605476 NetworkManager[49022]: <info>  [1770054885.2442] device (tapd6d55a9a-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.244 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.255 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa8c550-96df-4daf-9502-9158fa9b7540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.259 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d0845274-8e28-4d48-a734-19f6a5aba0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2f3d2cfa259cebd0b8daf821b73879c6daa90869819820e30cb48585e8ea736b-merged.mount: Deactivated successfully.
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.293 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[efc762d0-891e-458d-a4a5-55aa0db2e5a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.309 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8ee267-9732-409c-a8d8-a3813588d3cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408961, 'reachable_time': 27658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263139, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.325 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa15d965-a5cf-4962-a7f3-79b29eed4b9a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408968, 'tstamp': 408968}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263141, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408970, 'tstamp': 408970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263141, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:54:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.328 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.330 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 podman[263024]: 2026-02-02 17:54:45.331269003 +0000 UTC m=+0.734960022 container remove 7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_moser, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.333 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.333 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.333 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.334 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.334 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:54:45 np0005605476 systemd[1]: libpod-conmon-7168c25fefbf8c9e01aba5709fbc60a1013e13f7419e772c65d4ebb4c997cea0.scope: Deactivated successfully.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.541 239853 DEBUG nova.compute.manager [req-5b3098e2-22ac-4a73-9ac8-3ebaf7df2cb3 req-865e6db0-373a-46b6-9acf-dbd8fee4887b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.541 239853 DEBUG oslo_concurrency.lockutils [req-5b3098e2-22ac-4a73-9ac8-3ebaf7df2cb3 req-865e6db0-373a-46b6-9acf-dbd8fee4887b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.541 239853 DEBUG oslo_concurrency.lockutils [req-5b3098e2-22ac-4a73-9ac8-3ebaf7df2cb3 req-865e6db0-373a-46b6-9acf-dbd8fee4887b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 12 KiB/s wr, 276 op/s
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.542 239853 DEBUG oslo_concurrency.lockutils [req-5b3098e2-22ac-4a73-9ac8-3ebaf7df2cb3 req-865e6db0-373a-46b6-9acf-dbd8fee4887b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.543 239853 DEBUG nova.compute.manager [req-5b3098e2-22ac-4a73-9ac8-3ebaf7df2cb3 req-865e6db0-373a-46b6-9acf-dbd8fee4887b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Processing event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.659 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.661 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:54:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:45.662 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.779681219 +0000 UTC m=+0.057345123 container create 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:54:45 np0005605476 systemd[1]: Started libpod-conmon-5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66.scope.
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.746252199 +0000 UTC m=+0.023916193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.851 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.853 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054885.8531523, 62d2d76c-ea08-478d-abff-dd6c432e51af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.853 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] VM Started (Lifecycle Event)#033[00m
Feb  2 12:54:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.855 239853 DEBUG nova.virt.libvirt.driver [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.858 239853 INFO nova.virt.libvirt.driver [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Instance spawned successfully.#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.858 239853 INFO nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Took 2.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.858 239853 DEBUG nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.861663743 +0000 UTC m=+0.139327747 container init 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:54:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.874 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.876993575 +0000 UTC m=+0.154657479 container start 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.878 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.881214383 +0000 UTC m=+0.158878307 container attach 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:54:45 np0005605476 systemd[1]: libpod-5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66.scope: Deactivated successfully.
Feb  2 12:54:45 np0005605476 gracious_brown[263263]: 167 167
Feb  2 12:54:45 np0005605476 conmon[263263]: conmon 5d18e575c22dfaff8abf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66.scope/container/memory.events
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.884561557 +0000 UTC m=+0.162225471 container died 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:54:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2a12090c7d489037e894df71dc65d5123881e9864c848adfde0ce9c1592d26ab-merged.mount: Deactivated successfully.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.906 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.906 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054885.8558292, 62d2d76c-ea08-478d-abff-dd6c432e51af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.906 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] VM Paused (Lifecycle Event)
Feb  2 12:54:45 np0005605476 podman[263223]: 2026-02-02 17:54:45.915421005 +0000 UTC m=+0.193084909 container remove 5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.927 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:54:45 np0005605476 systemd[1]: libpod-conmon-5d18e575c22dfaff8abf841750d7da743d3e11b7f9a3d8fa9553fb732c8d7a66.scope: Deactivated successfully.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.931 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054885.8573534, 62d2d76c-ea08-478d-abff-dd6c432e51af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.932 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] VM Resumed (Lifecycle Event)
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.940 239853 INFO nova.compute.manager [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Took 6.63 seconds to build instance.
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.959 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.962 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 12:54:45 np0005605476 nova_compute[239846]: 2026-02-02 17:54:45.973 239853 DEBUG oslo_concurrency.lockutils [None req-817ab655-b7e8-487e-a212-ba5734b14fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.078943242 +0000 UTC m=+0.045039447 container create 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:54:46 np0005605476 systemd[1]: Started libpod-conmon-54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719.scope.
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.055238926 +0000 UTC m=+0.021335191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/383253565' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/383253565' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387c56dd6eabf6d2998213653f4b63f72a5f489062e6bd2e2cb4539c38db8a06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387c56dd6eabf6d2998213653f4b63f72a5f489062e6bd2e2cb4539c38db8a06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387c56dd6eabf6d2998213653f4b63f72a5f489062e6bd2e2cb4539c38db8a06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:46 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387c56dd6eabf6d2998213653f4b63f72a5f489062e6bd2e2cb4539c38db8a06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.174363054 +0000 UTC m=+0.140459329 container init 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.18061388 +0000 UTC m=+0.146710065 container start 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.183873202 +0000 UTC m=+0.149969427 container attach 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:54:46 np0005605476 kind_morse[263303]: {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    "0": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "devices": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "/dev/loop3"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            ],
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_name": "ceph_lv0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_size": "21470642176",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "name": "ceph_lv0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "tags": {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_name": "ceph",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.crush_device_class": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.encrypted": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.objectstore": "bluestore",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_id": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.vdo": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.with_tpm": "0"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            },
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "vg_name": "ceph_vg0"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        }
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    ],
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    "1": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "devices": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "/dev/loop4"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            ],
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_name": "ceph_lv1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_size": "21470642176",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "name": "ceph_lv1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "tags": {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_name": "ceph",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.crush_device_class": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.encrypted": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.objectstore": "bluestore",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_id": "1",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.vdo": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.with_tpm": "0"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            },
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "vg_name": "ceph_vg1"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        }
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    ],
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    "2": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "devices": [
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "/dev/loop5"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            ],
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_name": "ceph_lv2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_size": "21470642176",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "name": "ceph_lv2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "tags": {
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.cluster_name": "ceph",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.crush_device_class": "",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.encrypted": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.objectstore": "bluestore",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osd_id": "2",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.vdo": "0",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:                "ceph.with_tpm": "0"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            },
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "type": "block",
Feb  2 12:54:46 np0005605476 kind_morse[263303]:            "vg_name": "ceph_vg2"
Feb  2 12:54:46 np0005605476 kind_morse[263303]:        }
Feb  2 12:54:46 np0005605476 kind_morse[263303]:    ]
Feb  2 12:54:46 np0005605476 kind_morse[263303]: }
Feb  2 12:54:46 np0005605476 systemd[1]: libpod-54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719.scope: Deactivated successfully.
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.479654417 +0000 UTC m=+0.445750602 container died 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:54:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-387c56dd6eabf6d2998213653f4b63f72a5f489062e6bd2e2cb4539c38db8a06-merged.mount: Deactivated successfully.
Feb  2 12:54:46 np0005605476 podman[263286]: 2026-02-02 17:54:46.521621057 +0000 UTC m=+0.487717242 container remove 54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 12:54:46 np0005605476 systemd[1]: libpod-conmon-54edadf70a45f66feecfdaeafb533d20326f00ea191c775d35787252a2af5719.scope: Deactivated successfully.
Feb  2 12:54:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:46.645 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:54:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:46.646 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:54:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:46.647 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Feb  2 12:54:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.00049277 +0000 UTC m=+0.053729562 container create 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:54:47 np0005605476 systemd[1]: Started libpod-conmon-73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169.scope.
Feb  2 12:54:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.054404916 +0000 UTC m=+0.107641758 container init 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.061565617 +0000 UTC m=+0.114802449 container start 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:54:47 np0005605476 recursing_wu[263401]: 167 167
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.065189339 +0000 UTC m=+0.118426211 container attach 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:54:47 np0005605476 systemd[1]: libpod-73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169.scope: Deactivated successfully.
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:46.968907852 +0000 UTC m=+0.022144654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:47 np0005605476 conmon[263401]: conmon 73654fc312ec3f21ef98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169.scope/container/memory.events
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.066837945 +0000 UTC m=+0.120074767 container died 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 12:54:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ef04f8209d0ee8a4c5ab23a522c5fbf8382ca5fbe94fec83a078b269735321d5-merged.mount: Deactivated successfully.
Feb  2 12:54:47 np0005605476 podman[263385]: 2026-02-02 17:54:47.106998184 +0000 UTC m=+0.160234956 container remove 73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb  2 12:54:47 np0005605476 systemd[1]: libpod-conmon-73654fc312ec3f21ef98d0ccea9ac62e8f26d0defe903a4b1418824e73801169.scope: Deactivated successfully.
Feb  2 12:54:47 np0005605476 podman[263426]: 2026-02-02 17:54:47.232500822 +0000 UTC m=+0.041237150 container create c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 12:54:47 np0005605476 systemd[1]: Started libpod-conmon-c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac.scope.
Feb  2 12:54:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:54:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a004326a6a611ac991119d24b041e8d5feffabaa6e1cfb659bf902ad6c795946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a004326a6a611ac991119d24b041e8d5feffabaa6e1cfb659bf902ad6c795946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a004326a6a611ac991119d24b041e8d5feffabaa6e1cfb659bf902ad6c795946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a004326a6a611ac991119d24b041e8d5feffabaa6e1cfb659bf902ad6c795946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:54:47 np0005605476 podman[263426]: 2026-02-02 17:54:47.214735353 +0000 UTC m=+0.023471681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:54:47 np0005605476 podman[263426]: 2026-02-02 17:54:47.325919279 +0000 UTC m=+0.134655617 container init c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:54:47 np0005605476 podman[263426]: 2026-02-02 17:54:47.332944706 +0000 UTC m=+0.141681044 container start c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 12:54:47 np0005605476 podman[263426]: 2026-02-02 17:54:47.340839138 +0000 UTC m=+0.149575456 container attach c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.697055090076873e-06 of space, bias 1.0, pg target 0.0029091165270230617 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011209213711161736 of space, bias 1.0, pg target 0.3362764113348521 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4899426029476328e-06 of space, bias 1.0, pg target 0.0007469827808842898 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666691749723285 of space, bias 1.0, pg target 0.2000075249169855 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.910586973710584e-07 of space, bias 4.0, pg target 0.0011892704368452701 quantized to 16 (current 16)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:54:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 11 KiB/s wr, 237 op/s
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.637 239853 DEBUG nova.compute.manager [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.639 239853 DEBUG oslo_concurrency.lockutils [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.639 239853 DEBUG oslo_concurrency.lockutils [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.639 239853 DEBUG oslo_concurrency.lockutils [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.639 239853 DEBUG nova.compute.manager [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] No waiting events found dispatching network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:54:47 np0005605476 nova_compute[239846]: 2026-02-02 17:54:47.640 239853 WARNING nova.compute.manager [req-3964c68a-8a33-4128-8695-ae657f195570 req-1676dbf3-f5a7-4b49-ac79-a10ff6291101 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received unexpected event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece for instance with vm_state active and task_state None.#033[00m
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Feb  2 12:54:48 np0005605476 lvm[263518]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:54:48 np0005605476 lvm[263518]: VG ceph_vg0 finished
Feb  2 12:54:48 np0005605476 lvm[263520]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:54:48 np0005605476 lvm[263520]: VG ceph_vg1 finished
Feb  2 12:54:48 np0005605476 lvm[263522]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:54:48 np0005605476 lvm[263522]: VG ceph_vg2 finished
Feb  2 12:54:48 np0005605476 nice_spence[263443]: {}
Feb  2 12:54:48 np0005605476 systemd[1]: libpod-c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac.scope: Deactivated successfully.
Feb  2 12:54:48 np0005605476 systemd[1]: libpod-c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac.scope: Consumed 1.146s CPU time.
Feb  2 12:54:48 np0005605476 conmon[263443]: conmon c6c630cbbca5d4f89f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac.scope/container/memory.events
Feb  2 12:54:48 np0005605476 podman[263426]: 2026-02-02 17:54:48.218609005 +0000 UTC m=+1.027345363 container died c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Feb  2 12:54:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a004326a6a611ac991119d24b041e8d5feffabaa6e1cfb659bf902ad6c795946-merged.mount: Deactivated successfully.
Feb  2 12:54:48 np0005605476 podman[263426]: 2026-02-02 17:54:48.460598518 +0000 UTC m=+1.269334836 container remove c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_spence, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:54:48 np0005605476 systemd[1]: libpod-conmon-c6c630cbbca5d4f89f4e686dafcee913b61546242f8dd1e873fc8fbf28fa6cac.scope: Deactivated successfully.
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192989155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:54:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192989155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:54:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.339 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 41 KiB/s wr, 380 op/s
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.791 239853 DEBUG nova.compute.manager [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-changed-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.792 239853 DEBUG nova.compute.manager [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Refreshing instance network info cache due to event network-changed-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.792 239853 DEBUG oslo_concurrency.lockutils [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.792 239853 DEBUG oslo_concurrency.lockutils [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:54:49 np0005605476 nova_compute[239846]: 2026-02-02 17:54:49.792 239853 DEBUG nova.network.neutron [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Refreshing network info cache for port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:54:50 np0005605476 nova_compute[239846]: 2026-02-02 17:54:50.068 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Feb  2 12:54:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Feb  2 12:54:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Feb  2 12:54:50 np0005605476 podman[263561]: 2026-02-02 17:54:50.595676991 +0000 UTC m=+0.043034681 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 12:54:51 np0005605476 nova_compute[239846]: 2026-02-02 17:54:51.423 239853 DEBUG nova.network.neutron [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updated VIF entry in instance network info cache for port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:54:51 np0005605476 nova_compute[239846]: 2026-02-02 17:54:51.424 239853 DEBUG nova.network.neutron [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updating instance_info_cache with network_info: [{"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:54:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 33 KiB/s wr, 267 op/s
Feb  2 12:54:51 np0005605476 nova_compute[239846]: 2026-02-02 17:54:51.687 239853 DEBUG oslo_concurrency.lockutils [req-25609cf7-8242-44f5-9a8d-c7bd7db59c66 req-1beac2ba-27c1-4681-ae46-71fc79062239 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-62d2d76c-ea08-478d-abff-dd6c432e51af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:54:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 28 KiB/s wr, 228 op/s
Feb  2 12:54:54 np0005605476 nova_compute[239846]: 2026-02-02 17:54:54.345 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:54 np0005605476 podman[263583]: 2026-02-02 17:54:54.61227541 +0000 UTC m=+0.060954895 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb  2 12:54:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:54:54.663 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:54:55 np0005605476 nova_compute[239846]: 2026-02-02 17:54:55.071 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:54:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Feb  2 12:54:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Feb  2 12:54:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Feb  2 12:54:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 31 KiB/s wr, 216 op/s
Feb  2 12:54:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:54:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2514220917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:54:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:57Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Feb  2 12:54:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:54:57Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f4:f5:62 10.100.0.4
Feb  2 12:54:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Feb  2 12:54:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Feb  2 12:54:57 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Feb  2 12:54:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.2 KiB/s wr, 87 op/s
Feb  2 12:54:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Feb  2 12:54:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Feb  2 12:54:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Feb  2 12:54:59 np0005605476 nova_compute[239846]: 2026-02-02 17:54:59.348 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:54:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Feb  2 12:54:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Feb  2 12:54:59 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Feb  2 12:54:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 171 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 183 KiB/s wr, 152 op/s
Feb  2 12:55:00 np0005605476 nova_compute[239846]: 2026-02-02 17:55:00.072 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Feb  2 12:55:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Feb  2 12:55:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Feb  2 12:55:01 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:01Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Feb  2 12:55:01 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:01Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f4:f5:62 10.100.0.4
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Feb  2 12:55:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 230 op/s
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/637136086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/637136086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:02 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:02Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:f5:62 10.100.0.4
Feb  2 12:55:02 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:02Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:f5:62 10.100.0.4
Feb  2 12:55:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 177 op/s
Feb  2 12:55:04 np0005605476 nova_compute[239846]: 2026-02-02 17:55:04.352 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1114986522' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1114986522' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:05 np0005605476 nova_compute[239846]: 2026-02-02 17:55:05.073 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 521 KiB/s rd, 932 KiB/s wr, 112 op/s
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 719 KiB/s wr, 86 op/s
Feb  2 12:55:09 np0005605476 nova_compute[239846]: 2026-02-02 17:55:09.356 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 349 KiB/s rd, 640 KiB/s wr, 77 op/s
Feb  2 12:55:10 np0005605476 nova_compute[239846]: 2026-02-02 17:55:10.074 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Feb  2 12:55:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Feb  2 12:55:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Feb  2 12:55:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 72 KiB/s wr, 41 op/s
Feb  2 12:55:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 72 KiB/s wr, 41 op/s
Feb  2 12:55:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Feb  2 12:55:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Feb  2 12:55:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Feb  2 12:55:14 np0005605476 nova_compute[239846]: 2026-02-02 17:55:14.359 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Feb  2 12:55:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Feb  2 12:55:14 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Feb  2 12:55:15 np0005605476 nova_compute[239846]: 2026-02-02 17:55:15.076 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 9.8 KiB/s wr, 35 op/s
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685496127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685496127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671918967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671918967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.770030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916770073, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2794, "num_deletes": 521, "total_data_size": 3579428, "memory_usage": 3651680, "flush_reason": "Manual Compaction"}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916781987, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2920846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26585, "largest_seqno": 29378, "table_properties": {"data_size": 2909138, "index_size": 7114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 30063, "raw_average_key_size": 21, "raw_value_size": 2882935, "raw_average_value_size": 2051, "num_data_blocks": 307, "num_entries": 1405, "num_filter_entries": 1405, "num_deletions": 521, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054776, "oldest_key_time": 1770054776, "file_creation_time": 1770054916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 11993 microseconds, and 5088 cpu microseconds.
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.782023) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2920846 bytes OK
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.782039) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.783426) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.783442) EVENT_LOG_v1 {"time_micros": 1770054916783437, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.783462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3566285, prev total WAL file size 3566285, number of live WAL files 2.
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.784369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2852KB)], [59(10MB)]
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916784437, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13664388, "oldest_snapshot_seqno": -1}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5968 keys, 9126456 bytes, temperature: kUnknown
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916826390, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9126456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9082264, "index_size": 28146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 149238, "raw_average_key_size": 25, "raw_value_size": 8970660, "raw_average_value_size": 1503, "num_data_blocks": 1134, "num_entries": 5968, "num_filter_entries": 5968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770054916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.826593) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9126456 bytes
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.827868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 325.2 rd, 217.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.1) OK, records in: 6965, records dropped: 997 output_compression: NoCompression
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.827884) EVENT_LOG_v1 {"time_micros": 1770054916827876, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42012, "compaction_time_cpu_micros": 20288, "output_level": 6, "num_output_files": 1, "total_output_size": 9126456, "num_input_records": 6965, "num_output_records": 5968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916828231, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770054916829176, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.784252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.829212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.829217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.829220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.829223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:55:16.829226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:55:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.2 KiB/s wr, 29 op/s
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034771097' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034771097' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:18 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:18Z|00177|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Feb  2 12:55:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Feb  2 12:55:19 np0005605476 nova_compute[239846]: 2026-02-02 17:55:19.362 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 10 KiB/s wr, 93 op/s
Feb  2 12:55:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Feb  2 12:55:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Feb  2 12:55:19 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Feb  2 12:55:20 np0005605476 nova_compute[239846]: 2026-02-02 17:55:20.128 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Feb  2 12:55:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Feb  2 12:55:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Feb  2 12:55:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398050854' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398050854' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Feb  2 12:55:21 np0005605476 podman[263609]: 2026-02-02 17:55:21.596790332 +0000 UTC m=+0.048395041 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.631 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.631 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.631 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.631 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.632 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.633 239853 INFO nova.compute.manager [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Terminating instance#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.634 239853 DEBUG nova.compute.manager [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:55:21 np0005605476 kernel: tapd6d55a9a-32 (unregistering): left promiscuous mode
Feb  2 12:55:21 np0005605476 NetworkManager[49022]: <info>  [1770054921.6893] device (tapd6d55a9a-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.696 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:21Z|00178|binding|INFO|Releasing lport d6d55a9a-3209-4cd6-8a7b-4f61ea296ece from this chassis (sb_readonly=0)
Feb  2 12:55:21 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:21Z|00179|binding|INFO|Setting lport d6d55a9a-3209-4cd6-8a7b-4f61ea296ece down in Southbound
Feb  2 12:55:21 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:21Z|00180|binding|INFO|Removing iface tapd6d55a9a-32 ovn-installed in OVS
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.698 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.706 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.706 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:f5:62 10.100.0.4'], port_security=['fa:16:3e:f4:f5:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '62d2d76c-ea08-478d-abff-dd6c432e51af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a620f03d-b32d-45ef-b068-da1cff51af0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.708 155391 INFO neutron.agent.ovn.metadata.agent [-] Port d6d55a9a-3209-4cd6-8a7b-4f61ea296ece in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.710 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.724 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f2f739-de93-410b-8479-82a78505a6f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.749 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[be5f6334-5230-4fab-aff8-c713b618042d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.752 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[8964b1a2-7ef1-4019-97e0-7d27a1269cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 12.475s CPU time.
Feb  2 12:55:21 np0005605476 systemd-machined[208080]: Machine qemu-18-instance-00000012 terminated.
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.774 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6f08da-3776-4545-b100-8847aa0ec3b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.787 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c28cb92a-7864-4867-b01c-40ffbde49055]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408961, 'reachable_time': 27658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263642, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.800 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[219d7404-6d85-4e76-ba88-2c197b68cf7e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408968, 'tstamp': 408968}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263643, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408970, 'tstamp': 408970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263643, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.803 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.805 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.811 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.812 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.813 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.814 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:21.814 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.851 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.856 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.870 239853 INFO nova.virt.libvirt.driver [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Instance destroyed successfully.#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.871 239853 DEBUG nova.objects.instance [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid 62d2d76c-ea08-478d-abff-dd6c432e51af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.887 239853 DEBUG nova.virt.libvirt.vif [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1295231063',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1295231063',id=18,image_ref='68879bae-06b2-4ebd-9426-376724b14bd8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3AJmOQ2wCUPzcRtEuFARYK45kpEOherY7vePVVnccOEsGkDUrhLVJDLlvSMS1USmAynrCgFqFaEult0hFZjMdEz3wzm3ZwEnOuDtsxD0DU2Udc0rDZR7vpdY5J/AvCgA==',key_name='tempest-keypair-648370203',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:54:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-vfz5je1x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1185251615',image_owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:54:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=62d2d76c-ea08-478d-abff-dd6c432e51af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.888 239853 DEBUG nova.network.os_vif_util [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "address": "fa:16:3e:f4:f5:62", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6d55a9a-32", "ovs_interfaceid": "d6d55a9a-3209-4cd6-8a7b-4f61ea296ece", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.889 239853 DEBUG nova.network.os_vif_util [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.890 239853 DEBUG os_vif [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.893 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.893 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d55a9a-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.895 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.897 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.901 239853 INFO os_vif [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:f5:62,bridge_name='br-int',has_traffic_filtering=True,id=d6d55a9a-3209-4cd6-8a7b-4f61ea296ece,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6d55a9a-32')#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.978 239853 DEBUG nova.compute.manager [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-unplugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.978 239853 DEBUG oslo_concurrency.lockutils [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.979 239853 DEBUG oslo_concurrency.lockutils [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.979 239853 DEBUG oslo_concurrency.lockutils [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.979 239853 DEBUG nova.compute.manager [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] No waiting events found dispatching network-vif-unplugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:55:21 np0005605476 nova_compute[239846]: 2026-02-02 17:55:21.979 239853 DEBUG nova.compute.manager [req-005a6343-baeb-4835-8fe8-7b8f055ba8a7 req-53ebaf91-f63e-4e69-b2fa-350625f807f4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-unplugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.050 239853 INFO nova.virt.libvirt.driver [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Deleting instance files /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af_del#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.051 239853 INFO nova.virt.libvirt.driver [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Deletion of /var/lib/nova/instances/62d2d76c-ea08-478d-abff-dd6c432e51af_del complete#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.120 239853 INFO nova.compute.manager [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.121 239853 DEBUG oslo.service.loopingcall [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.121 239853 DEBUG nova.compute.manager [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.121 239853 DEBUG nova.network.neutron [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.922 239853 DEBUG nova.network.neutron [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:55:22 np0005605476 nova_compute[239846]: 2026-02-02 17:55:22.936 239853 INFO nova.compute.manager [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Took 0.81 seconds to deallocate network for instance.#033[00m
Feb  2 12:55:23 np0005605476 nova_compute[239846]: 2026-02-02 17:55:23.148 239853 INFO nova.compute.manager [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:55:23 np0005605476 nova_compute[239846]: 2026-02-02 17:55:23.149 239853 DEBUG nova.compute.manager [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Deleting volume: 93100b3a-c311-4bac-931d-c3f35ef8736d _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 12:55:23 np0005605476 nova_compute[239846]: 2026-02-02 17:55:23.457 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:23 np0005605476 nova_compute[239846]: 2026-02-02 17:55:23.457 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:23 np0005605476 nova_compute[239846]: 2026-02-02 17:55:23.537 239853 DEBUG oslo_concurrency.processutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 185 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Feb  2 12:55:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301104736' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301104736' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:55:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567836851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.073 239853 DEBUG nova.compute.manager [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.073 239853 DEBUG oslo_concurrency.lockutils [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.074 239853 DEBUG oslo_concurrency.lockutils [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.074 239853 DEBUG oslo_concurrency.lockutils [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.074 239853 DEBUG nova.compute.manager [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] No waiting events found dispatching network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.074 239853 WARNING nova.compute.manager [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received unexpected event network-vif-plugged-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.075 239853 DEBUG nova.compute.manager [req-2f85e5cf-e5d3-4fdd-b3fc-e8113772e76d req-e8ac259f-023d-4c4d-8704-47fbca0953f1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Received event network-vif-deleted-d6d55a9a-3209-4cd6-8a7b-4f61ea296ece external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.088 239853 DEBUG oslo_concurrency.processutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.094 239853 DEBUG nova.compute.provider_tree [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.146 239853 DEBUG nova.scheduler.client.report [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.166 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.200 239853 INFO nova.scheduler.client.report [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance 62d2d76c-ea08-478d-abff-dd6c432e51af#033[00m
Feb  2 12:55:24 np0005605476 nova_compute[239846]: 2026-02-02 17:55:24.274 239853 DEBUG oslo_concurrency.lockutils [None req-14d60361-fbe3-4011-ae58-31beaf6211d5 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "62d2d76c-ea08-478d-abff-dd6c432e51af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Feb  2 12:55:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Feb  2 12:55:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Feb  2 12:55:25 np0005605476 nova_compute[239846]: 2026-02-02 17:55:25.131 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.0 KiB/s wr, 160 op/s
Feb  2 12:55:25 np0005605476 podman[263696]: 2026-02-02 17:55:25.639801882 +0000 UTC m=+0.082599853 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 12:55:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Feb  2 12:55:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Feb  2 12:55:25 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Feb  2 12:55:26 np0005605476 nova_compute[239846]: 2026-02-02 17:55:26.896 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3229504937' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3229504937' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.7 KiB/s wr, 133 op/s
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.468 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.469 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.469 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.469 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.470 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.471 239853 INFO nova.compute.manager [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Terminating instance#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.472 239853 DEBUG nova.compute.manager [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:55:28 np0005605476 kernel: tap48a7d2ef-41 (unregistering): left promiscuous mode
Feb  2 12:55:28 np0005605476 NetworkManager[49022]: <info>  [1770054928.5190] device (tap48a7d2ef-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.525 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:28Z|00181|binding|INFO|Releasing lport 48a7d2ef-4191-450c-b755-4c5e879a0285 from this chassis (sb_readonly=0)
Feb  2 12:55:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:28Z|00182|binding|INFO|Setting lport 48a7d2ef-4191-450c-b755-4c5e879a0285 down in Southbound
Feb  2 12:55:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:28Z|00183|binding|INFO|Removing iface tap48a7d2ef-41 ovn-installed in OVS
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.528 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.537 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.543 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:83:6f 10.100.0.7'], port_security=['fa:16:3e:14:83:6f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a53bf075-1459-4c3e-a411-2ee0267d280a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3147ee48-b44e-4242-8857-9bd3cf787c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=48a7d2ef-4191-450c-b755-4c5e879a0285) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.544 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 48a7d2ef-4191-450c-b755-4c5e879a0285 in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.545 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac1b83e6-8e85-484a-9623-8960b1107077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.546 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5e64f5-5f00-477e-bcbf-8c2c49a67817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.546 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace which is not needed anymore#033[00m
Feb  2 12:55:28 np0005605476 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb  2 12:55:28 np0005605476 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.068s CPU time.
Feb  2 12:55:28 np0005605476 systemd-machined[208080]: Machine qemu-17-instance-00000011 terminated.
Feb  2 12:55:28 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [NOTICE]   (262368) : haproxy version is 2.8.14-c23fe91
Feb  2 12:55:28 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [NOTICE]   (262368) : path to executable is /usr/sbin/haproxy
Feb  2 12:55:28 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [WARNING]  (262368) : Exiting Master process...
Feb  2 12:55:28 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [ALERT]    (262368) : Current worker (262370) exited with code 143 (Terminated)
Feb  2 12:55:28 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[262364]: [WARNING]  (262368) : All workers exited. Exiting... (0)
Feb  2 12:55:28 np0005605476 systemd[1]: libpod-17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6.scope: Deactivated successfully.
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.707 239853 INFO nova.virt.libvirt.driver [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Instance destroyed successfully.#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.707 239853 DEBUG nova.objects.instance [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid a53bf075-1459-4c3e-a411-2ee0267d280a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:55:28 np0005605476 podman[263747]: 2026-02-02 17:55:28.708382509 +0000 UTC m=+0.089762405 container died 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.740 239853 DEBUG nova.virt.libvirt.vif [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:53:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-213705024',display_name='tempest-TestVolumeBootPattern-volume-backed-server-213705024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-213705024',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCo+bZms1uWgtoO9xtR0soZQK4AH/2rpYkWJVnV3jxr7yl1icgiNFkifyBxQ9TjTMgkW7oRRaJJoS+pLaSs502TgdRV9mj2JCfdTmkSDSILI1onZ3oMMZof3bhJng3arrw==',key_name='tempest-keypair-619338074',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:54:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-z97ghp7q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:54:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7b8ea09739a4455840062f2ad81089a',uuid=a53bf075-1459-4c3e-a411-2ee0267d280a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.740 239853 DEBUG nova.network.os_vif_util [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "48a7d2ef-4191-450c-b755-4c5e879a0285", "address": "fa:16:3e:14:83:6f", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48a7d2ef-41", "ovs_interfaceid": "48a7d2ef-4191-450c-b755-4c5e879a0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.742 239853 DEBUG nova.network.os_vif_util [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.744 239853 DEBUG os_vif [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.746 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.746 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48a7d2ef-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.748 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.752 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.757 239853 INFO os_vif [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:83:6f,bridge_name='br-int',has_traffic_filtering=True,id=48a7d2ef-4191-450c-b755-4c5e879a0285,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48a7d2ef-41')#033[00m
Feb  2 12:55:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6-userdata-shm.mount: Deactivated successfully.
Feb  2 12:55:28 np0005605476 systemd[1]: var-lib-containers-storage-overlay-dcaeb776686caf92ef5b376434afaaae340ac927c211b24b64ff4002ee042e40-merged.mount: Deactivated successfully.
Feb  2 12:55:28 np0005605476 podman[263747]: 2026-02-02 17:55:28.803605016 +0000 UTC m=+0.184984912 container cleanup 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:55:28 np0005605476 systemd[1]: libpod-conmon-17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6.scope: Deactivated successfully.
Feb  2 12:55:28 np0005605476 podman[263807]: 2026-02-02 17:55:28.853470268 +0000 UTC m=+0.035712796 container remove 17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.857 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[47ae5f65-16a5-467d-9e56-a4b458abf7db]: (4, ('Mon Feb  2 05:55:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6)\n17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6\nMon Feb  2 05:55:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6)\n17064a9a52dff099d812c65c03f71b44d6509f9045880a8ff19a508c889028f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.860 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[814e7ba6-bcf4-4077-964a-e6222b6415dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.861 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.862 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 kernel: tapac1b83e6-80: left promiscuous mode
Feb  2 12:55:28 np0005605476 nova_compute[239846]: 2026-02-02 17:55:28.869 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.878 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cecbc2d5-35fa-4506-81b6-2bda434b6ca5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.898 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[90d00ca7-cca4-4795-83a4-7e69b2938e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.899 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[60463c8b-eec8-4f44-8890-6d8b127af0c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.911 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c69ca61-564f-45e2-9ebf-df3d1f360d92]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408957, 'reachable_time': 17169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263822, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.915 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:55:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:28.915 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6d8669-78d6-4c27-8ce8-87b53a3e9516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:28 np0005605476 systemd[1]: run-netns-ovnmeta\x2dac1b83e6\x2d8e85\x2d484a\x2d9623\x2d8960b1107077.mount: Deactivated successfully.
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.160 239853 INFO nova.virt.libvirt.driver [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Deleting instance files /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a_del#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.161 239853 INFO nova.virt.libvirt.driver [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Deletion of /var/lib/nova/instances/a53bf075-1459-4c3e-a411-2ee0267d280a_del complete#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.235 239853 INFO nova.compute.manager [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.235 239853 DEBUG oslo.service.loopingcall [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.236 239853 DEBUG nova.compute.manager [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.236 239853 DEBUG nova.network.neutron [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.309 239853 DEBUG nova.compute.manager [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-unplugged-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.310 239853 DEBUG oslo_concurrency.lockutils [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.310 239853 DEBUG oslo_concurrency.lockutils [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.311 239853 DEBUG oslo_concurrency.lockutils [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.311 239853 DEBUG nova.compute.manager [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] No waiting events found dispatching network-vif-unplugged-48a7d2ef-4191-450c-b755-4c5e879a0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:55:29 np0005605476 nova_compute[239846]: 2026-02-02 17:55:29.311 239853 DEBUG nova.compute.manager [req-c85c72d9-e1f1-4a28-92fc-baebd191a3b3 req-0526035c-ee59-404f-b0d8-bb1b89bde46d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-unplugged-48a7d2ef-4191-450c-b755-4c5e879a0285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:55:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 6.2 KiB/s wr, 150 op/s
Feb  2 12:55:30 np0005605476 nova_compute[239846]: 2026-02-02 17:55:30.134 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Feb  2 12:55:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Feb  2 12:55:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.066 239853 DEBUG nova.network.neutron [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.087 239853 INFO nova.compute.manager [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Took 1.85 seconds to deallocate network for instance.#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.393 239853 INFO nova.compute.manager [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.394 239853 DEBUG nova.compute.manager [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Deleting volume: 940f8ac7-d625-4924-b995-4acd1d4befc1 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.427 239853 DEBUG nova.compute.manager [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.428 239853 DEBUG oslo_concurrency.lockutils [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.428 239853 DEBUG oslo_concurrency.lockutils [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.428 239853 DEBUG oslo_concurrency.lockutils [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.428 239853 DEBUG nova.compute.manager [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] No waiting events found dispatching network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.429 239853 WARNING nova.compute.manager [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received unexpected event network-vif-plugged-48a7d2ef-4191-450c-b755-4c5e879a0285 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.429 239853 DEBUG nova.compute.manager [req-687b48c1-e0c5-4ef0-8198-d000e0c43eb7 req-d5ebceb3-f88c-493a-8bb5-77c041325df2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Received event network-vif-deleted-48a7d2ef-4191-450c-b755-4c5e879a0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 6.1 KiB/s wr, 122 op/s
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.599 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.599 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:31 np0005605476 nova_compute[239846]: 2026-02-02 17:55:31.673 239853 DEBUG oslo_concurrency.processutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1451765093' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1451765093' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:55:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604317879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.227 239853 DEBUG oslo_concurrency.processutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.232 239853 DEBUG nova.compute.provider_tree [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.245 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.259 239853 DEBUG nova.scheduler.client.report [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.288 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.322 239853 INFO nova.scheduler.client.report [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance a53bf075-1459-4c3e-a411-2ee0267d280a#033[00m
Feb  2 12:55:32 np0005605476 nova_compute[239846]: 2026-02-02 17:55:32.395 239853 DEBUG oslo_concurrency.lockutils [None req-80701ff7-0884-4540-aef9-0097b79e2fe9 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "a53bf075-1459-4c3e-a411-2ee0267d280a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510891572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510891572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.8 KiB/s wr, 80 op/s
Feb  2 12:55:33 np0005605476 nova_compute[239846]: 2026-02-02 17:55:33.750 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236340805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:34 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236340805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Feb  2 12:55:35 np0005605476 nova_compute[239846]: 2026-02-02 17:55:35.171 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Feb  2 12:55:35 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Feb  2 12:55:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.7 KiB/s wr, 123 op/s
Feb  2 12:55:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:55:36
Feb  2 12:55:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:55:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:55:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images']
Feb  2 12:55:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:55:36 np0005605476 nova_compute[239846]: 2026-02-02 17:55:36.870 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054921.868746, 62d2d76c-ea08-478d-abff-dd6c432e51af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:55:36 np0005605476 nova_compute[239846]: 2026-02-02 17:55:36.870 239853 INFO nova.compute.manager [-] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:55:36 np0005605476 nova_compute[239846]: 2026-02-02 17:55:36.897 239853 DEBUG nova.compute.manager [None req-60460918-5c4b-4bd2-bf8f-c0162c0a5c0e - - - - - -] [instance: 62d2d76c-ea08-478d-abff-dd6c432e51af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3083480161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3083480161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 5.6 KiB/s wr, 111 op/s
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:55:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.261 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.261 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.262 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.262 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.262 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.753 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:55:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942322816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.773 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.903 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.905 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=59.98780939448625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.905 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:38 np0005605476 nova_compute[239846]: 2026-02-02 17:55:38.906 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Feb  2 12:55:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Feb  2 12:55:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.351 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.352 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.365 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.380 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.381 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.399 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.425 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.440 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 7.6 KiB/s wr, 152 op/s
Feb  2 12:55:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:55:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830957879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.959 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.963 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:55:39 np0005605476 nova_compute[239846]: 2026-02-02 17:55:39.977 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:55:40 np0005605476 nova_compute[239846]: 2026-02-02 17:55:40.000 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:55:40 np0005605476 nova_compute[239846]: 2026-02-02 17:55:40.001 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685050542' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685050542' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:40 np0005605476 nova_compute[239846]: 2026-02-02 17:55:40.172 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253055799' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253055799' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.5 KiB/s wr, 130 op/s
Feb  2 12:55:42 np0005605476 nova_compute[239846]: 2026-02-02 17:55:42.997 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:42 np0005605476 nova_compute[239846]: 2026-02-02 17:55:42.997 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.015 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.015 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.015 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.031 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.032 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346816092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346816092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Feb  2 12:55:43 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Feb  2 12:55:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.5 KiB/s wr, 130 op/s
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.704 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054928.7037156, a53bf075-1459-4c3e-a411-2ee0267d280a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.705 239853 INFO nova.compute.manager [-] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.726 239853 DEBUG nova.compute.manager [None req-7cb93618-3a31-483e-8a4c-8a816f87203a - - - - - -] [instance: a53bf075-1459-4c3e-a411-2ee0267d280a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:43 np0005605476 nova_compute[239846]: 2026-02-02 17:55:43.757 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:44 np0005605476 nova_compute[239846]: 2026-02-02 17:55:44.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:44 np0005605476 nova_compute[239846]: 2026-02-02 17:55:44.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:45 np0005605476 nova_compute[239846]: 2026-02-02 17:55:45.173 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Feb  2 12:55:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Feb  2 12:55:45 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Feb  2 12:55:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 98 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 922 KiB/s wr, 149 op/s
Feb  2 12:55:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:45.727 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:55:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:45.728 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:55:45 np0005605476 nova_compute[239846]: 2026-02-02 17:55:45.729 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:46 np0005605476 nova_compute[239846]: 2026-02-02 17:55:46.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:55:46 np0005605476 nova_compute[239846]: 2026-02-02 17:55:46.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:55:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054038689' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054038689' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:46.646 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:46.647 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:46.647 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.91182669840606e-06 of space, bias 1.0, pg target 0.002373548009521818 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00045202044977405586 of space, bias 1.0, pg target 0.13560613493221677 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.1011015707032013e-06 of space, bias 1.0, pg target 0.0006303304712109604 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665069237036064 of space, bias 1.0, pg target 0.19995207711108193 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.821473042340983e-07 of space, bias 4.0, pg target 0.0011785767650809179 quantized to 16 (current 16)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:55:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 98 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 768 KiB/s wr, 124 op/s
Feb  2 12:55:48 np0005605476 nova_compute[239846]: 2026-02-02 17:55:48.761 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.069 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.071 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.093 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.175 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.176 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.181 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.181 239853 INFO nova.compute.claims [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.297 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.448700159 +0000 UTC m=+0.060034409 container create 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:55:49 np0005605476 systemd[1]: Started libpod-conmon-599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d.scope.
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.424971602 +0000 UTC m=+0.036305882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.531340773 +0000 UTC m=+0.142675043 container init 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.536977221 +0000 UTC m=+0.148311481 container start 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:55:49 np0005605476 serene_goldstine[264070]: 167 167
Feb  2 12:55:49 np0005605476 systemd[1]: libpod-599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d.scope: Deactivated successfully.
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.545192912 +0000 UTC m=+0.156527182 container attach 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.54548946 +0000 UTC m=+0.156823710 container died 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:55:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 134 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Feb  2 12:55:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-12d9b5d5ba70f11bafb1a120e9b505310791c16256fb450495278fa3d79e9602-merged.mount: Deactivated successfully.
Feb  2 12:55:49 np0005605476 podman[264034]: 2026-02-02 17:55:49.611432184 +0000 UTC m=+0.222766434 container remove 599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:55:49 np0005605476 systemd[1]: libpod-conmon-599594c0da5a7337b7b3d88970cfa0c29824f1f3f135d303e9b71c0efe550e1d.scope: Deactivated successfully.
Feb  2 12:55:49 np0005605476 podman[264094]: 2026-02-02 17:55:49.728906667 +0000 UTC m=+0.036173498 container create e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 12:55:49 np0005605476 systemd[1]: Started libpod-conmon-e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420.scope.
Feb  2 12:55:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:49 np0005605476 podman[264094]: 2026-02-02 17:55:49.713796872 +0000 UTC m=+0.021063703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:55:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765680943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:55:49 np0005605476 podman[264094]: 2026-02-02 17:55:49.818205076 +0000 UTC m=+0.125471927 container init e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 12:55:49 np0005605476 podman[264094]: 2026-02-02 17:55:49.82473314 +0000 UTC m=+0.131999971 container start e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 12:55:49 np0005605476 podman[264094]: 2026-02-02 17:55:49.83612603 +0000 UTC m=+0.143392861 container attach e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.836 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.844 239853 DEBUG nova.compute.provider_tree [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.885 239853 DEBUG nova.scheduler.client.report [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.947 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:49 np0005605476 nova_compute[239846]: 2026-02-02 17:55:49.948 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.043 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.044 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.073 239853 INFO nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.100 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.160 239853 INFO nova.virt.block_device [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Booting with volume a9096a6c-bb47-4b06-ade8-691252f8a0da at /dev/vda#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.176 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:50 np0005605476 pedantic_leavitt[264111]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:55:50 np0005605476 pedantic_leavitt[264111]: --> All data devices are unavailable
Feb  2 12:55:50 np0005605476 systemd[1]: libpod-e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420.scope: Deactivated successfully.
Feb  2 12:55:50 np0005605476 podman[264094]: 2026-02-02 17:55:50.273148916 +0000 UTC m=+0.580415737 container died e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:55:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3cca337902504909b6bcbe26bb4d8492043f0d28e4038e4ed8ab80e98c63587a-merged.mount: Deactivated successfully.
Feb  2 12:55:50 np0005605476 podman[264094]: 2026-02-02 17:55:50.325360334 +0000 UTC m=+0.632627165 container remove e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:55:50 np0005605476 systemd[1]: libpod-conmon-e2e241c33f5f1aa64b9f08648f83199bb1f5a07dc24606607da18659b84a1420.scope: Deactivated successfully.
Feb  2 12:55:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Feb  2 12:55:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Feb  2 12:55:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.575 239853 DEBUG nova.policy [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.662 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.68912258 +0000 UTC m=+0.037741262 container create 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.693 239853 DEBUG os_brick.utils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.694 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.705 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.705 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[a9084722-66f6-4f52-9a43-2f905556142e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.707 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.715 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.715 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[362d2f77-390a-4959-a476-8c3d1231028c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.716 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:50 np0005605476 systemd[1]: Started libpod-conmon-342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd.scope.
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.724 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.724 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0a405346-2702-4291-ab10-80c8e65a5e6b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.727 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[88adab2c-33fd-4683-875f-e67cedfaaf66]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.727 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.744 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.746 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.747 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.747 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.747 239853 DEBUG os_brick.utils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (54ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:55:50 np0005605476 nova_compute[239846]: 2026-02-02 17:55:50.748 239853 DEBUG nova.virt.block_device [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updating existing volume attachment record: 91463136-14e3-4942-9502-b89ae927ae01 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.754849178 +0000 UTC m=+0.103467890 container init 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.760429725 +0000 UTC m=+0.109048417 container start 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 12:55:50 np0005605476 silly_jennings[264228]: 167 167
Feb  2 12:55:50 np0005605476 systemd[1]: libpod-342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd.scope: Deactivated successfully.
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.766082744 +0000 UTC m=+0.114701466 container attach 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:55:50 np0005605476 conmon[264228]: conmon 342d0d29eb72b129dbd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd.scope/container/memory.events
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.767165954 +0000 UTC m=+0.115784646 container died 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.675609181 +0000 UTC m=+0.024227913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b7bffeec4e0893cd24118088aa71d75b6e164e1d0c375b689f0a9a8f1d795a56-merged.mount: Deactivated successfully.
Feb  2 12:55:50 np0005605476 podman[264206]: 2026-02-02 17:55:50.817331035 +0000 UTC m=+0.165949727 container remove 342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:55:50 np0005605476 systemd[1]: libpod-conmon-342d0d29eb72b129dbd3b0a19be720187b968ffd07ab87511ee7fa0901a537dd.scope: Deactivated successfully.
Feb  2 12:55:50 np0005605476 podman[264252]: 2026-02-02 17:55:50.961177869 +0000 UTC m=+0.048225627 container create 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:55:51 np0005605476 systemd[1]: Started libpod-conmon-5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f.scope.
Feb  2 12:55:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82077db5ec8672c9dffef1d2afd793b969734a2c7ed473d7ccc784ee308c095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82077db5ec8672c9dffef1d2afd793b969734a2c7ed473d7ccc784ee308c095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82077db5ec8672c9dffef1d2afd793b969734a2c7ed473d7ccc784ee308c095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:51 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82077db5ec8672c9dffef1d2afd793b969734a2c7ed473d7ccc784ee308c095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:50.935514407 +0000 UTC m=+0.022562185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:51.039361937 +0000 UTC m=+0.126409725 container init 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:51.045757497 +0000 UTC m=+0.132805255 container start 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:51.057025073 +0000 UTC m=+0.144072831 container attach 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]: {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    "0": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "devices": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "/dev/loop3"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            ],
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_name": "ceph_lv0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_size": "21470642176",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "name": "ceph_lv0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "tags": {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_name": "ceph",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.crush_device_class": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.encrypted": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.objectstore": "bluestore",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_id": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.vdo": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.with_tpm": "0"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            },
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "vg_name": "ceph_vg0"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        }
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    ],
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    "1": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "devices": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "/dev/loop4"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            ],
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_name": "ceph_lv1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_size": "21470642176",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "name": "ceph_lv1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "tags": {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_name": "ceph",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.crush_device_class": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.encrypted": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.objectstore": "bluestore",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_id": "1",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.vdo": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.with_tpm": "0"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            },
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "vg_name": "ceph_vg1"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        }
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    ],
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    "2": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "devices": [
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "/dev/loop5"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            ],
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_name": "ceph_lv2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_size": "21470642176",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "name": "ceph_lv2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "tags": {
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.cluster_name": "ceph",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.crush_device_class": "",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.encrypted": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.objectstore": "bluestore",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osd_id": "2",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.vdo": "0",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:                "ceph.with_tpm": "0"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            },
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "type": "block",
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:            "vg_name": "ceph_vg2"
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:        }
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]:    ]
Feb  2 12:55:51 np0005605476 peaceful_bhabha[264268]: }
Feb  2 12:55:51 np0005605476 systemd[1]: libpod-5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f.scope: Deactivated successfully.
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:51.340216575 +0000 UTC m=+0.427264363 container died 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:55:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f82077db5ec8672c9dffef1d2afd793b969734a2c7ed473d7ccc784ee308c095-merged.mount: Deactivated successfully.
Feb  2 12:55:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Feb  2 12:55:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Feb  2 12:55:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Feb  2 12:55:51 np0005605476 podman[264252]: 2026-02-02 17:55:51.419981477 +0000 UTC m=+0.507029275 container remove 5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bhabha, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:55:51 np0005605476 systemd[1]: libpod-conmon-5e3df2caf525cb0b0be35cb1ee442a55f070a3e6f2b63d3ca285275dfbe5476f.scope: Deactivated successfully.
Feb  2 12:55:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:55:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762525147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:55:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.7 MiB/s wr, 114 op/s
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.818129311 +0000 UTC m=+0.033316268 container create d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:55:51 np0005605476 systemd[1]: Started libpod-conmon-d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba.scope.
Feb  2 12:55:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.875193625 +0000 UTC m=+0.090380602 container init d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.881204284 +0000 UTC m=+0.096391261 container start d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:55:51 np0005605476 tender_chatterjee[264369]: 167 167
Feb  2 12:55:51 np0005605476 systemd[1]: libpod-d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba.scope: Deactivated successfully.
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.888257862 +0000 UTC m=+0.103444839 container attach d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.888918741 +0000 UTC m=+0.104105708 container died d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.805543487 +0000 UTC m=+0.020730464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.910 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.913 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.914 239853 INFO nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Creating image(s)
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.914 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.914 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Ensure instance console log exists: /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.915 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.915 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:55:51 np0005605476 nova_compute[239846]: 2026-02-02 17:55:51.915 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:55:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6a7599601665d4f55c1a68fabb7e6019e5c837d7aa8a575cdf29f6212756ad63-merged.mount: Deactivated successfully.
Feb  2 12:55:51 np0005605476 podman[264352]: 2026-02-02 17:55:51.945536302 +0000 UTC m=+0.160723259 container remove d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chatterjee, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:55:51 np0005605476 systemd[1]: libpod-conmon-d22fd6df12a4e630db5e08c6006ac23f7012211eebb4343aeed53cad87e9eeba.scope: Deactivated successfully.
Feb  2 12:55:51 np0005605476 podman[264366]: 2026-02-02 17:55:51.959381232 +0000 UTC m=+0.103068449 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.071951776 +0000 UTC m=+0.037416073 container create a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 12:55:52 np0005605476 systemd[1]: Started libpod-conmon-a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b.scope.
Feb  2 12:55:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0eb9a79ffb5347855c9fdf2870af322766ff312625ab3ee49ed06865b5e7ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0eb9a79ffb5347855c9fdf2870af322766ff312625ab3ee49ed06865b5e7ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0eb9a79ffb5347855c9fdf2870af322766ff312625ab3ee49ed06865b5e7ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0eb9a79ffb5347855c9fdf2870af322766ff312625ab3ee49ed06865b5e7ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.056987016 +0000 UTC m=+0.022451323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.160714462 +0000 UTC m=+0.126178779 container init a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.1673954 +0000 UTC m=+0.132859687 container start a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.17380421 +0000 UTC m=+0.139268497 container attach a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:55:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3994367778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3994367778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:52 np0005605476 nova_compute[239846]: 2026-02-02 17:55:52.693 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Successfully created port: 829f5c9b-056c-42da-8802-d98a0542810c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 12:55:52 np0005605476 lvm[264508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:55:52 np0005605476 lvm[264508]: VG ceph_vg1 finished
Feb  2 12:55:52 np0005605476 lvm[264505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:55:52 np0005605476 lvm[264505]: VG ceph_vg0 finished
Feb  2 12:55:52 np0005605476 lvm[264510]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:55:52 np0005605476 lvm[264510]: VG ceph_vg2 finished
Feb  2 12:55:52 np0005605476 kind_gould[264429]: {}
Feb  2 12:55:52 np0005605476 systemd[1]: libpod-a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b.scope: Deactivated successfully.
Feb  2 12:55:52 np0005605476 systemd[1]: libpod-a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b.scope: Consumed 1.026s CPU time.
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.904250605 +0000 UTC m=+0.869714932 container died a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:55:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd0eb9a79ffb5347855c9fdf2870af322766ff312625ab3ee49ed06865b5e7ff-merged.mount: Deactivated successfully.
Feb  2 12:55:52 np0005605476 podman[264413]: 2026-02-02 17:55:52.972776401 +0000 UTC m=+0.938240728 container remove a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:55:52 np0005605476 systemd[1]: libpod-conmon-a98ec4382c58f52d612ee6c5209189aaa1bfcf9f7a8a5e6e73d7b1e787e6429b.scope: Deactivated successfully.
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:53 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:55:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.7 MiB/s wr, 114 op/s
Feb  2 12:55:53 np0005605476 nova_compute[239846]: 2026-02-02 17:55:53.764 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.307 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Successfully updated port: 829f5c9b-056c-42da-8802-d98a0542810c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.341 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.341 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.341 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.406 239853 DEBUG nova.compute.manager [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-changed-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.407 239853 DEBUG nova.compute.manager [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Refreshing instance network info cache due to event network-changed-829f5c9b-056c-42da-8802-d98a0542810c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.407 239853 DEBUG oslo_concurrency.lockutils [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 12:55:54 np0005605476 nova_compute[239846]: 2026-02-02 17:55:54.520 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb  2 12:55:54 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:54.731 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 12:55:55 np0005605476 nova_compute[239846]: 2026-02-02 17:55:55.176 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:55:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:55:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Feb  2 12:55:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Feb  2 12:55:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Feb  2 12:55:55 np0005605476 nova_compute[239846]: 2026-02-02 17:55:55.542 239853 DEBUG nova.network.neutron [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updating instance_info_cache with network_info: [{"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 12:55:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.0 KiB/s wr, 68 op/s
Feb  2 12:55:55 np0005605476 nova_compute[239846]: 2026-02-02 17:55:55.997 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.017 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.017 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Instance network_info: |[{"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.018 239853 DEBUG oslo_concurrency.lockutils [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.018 239853 DEBUG nova.network.neutron [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Refreshing network info cache for port 829f5c9b-056c-42da-8802-d98a0542810c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.022 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Start _get_guest_xml network_info=[{"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '91463136-14e3-4942-9502-b89ae927ae01', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a9096a6c-bb47-4b06-ade8-691252f8a0da', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a9096a6c-bb47-4b06-ade8-691252f8a0da', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5ea4616f-2103-405a-985a-e8f8839f1a05', 'attached_at': '', 'detached_at': '', 'volume_id': 'a9096a6c-bb47-4b06-ade8-691252f8a0da', 'serial': 'a9096a6c-bb47-4b06-ade8-691252f8a0da'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.028 239853 WARNING nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.033 239853 DEBUG nova.virt.libvirt.host [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.034 239853 DEBUG nova.virt.libvirt.host [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.037 239853 DEBUG nova.virt.libvirt.host [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.037 239853 DEBUG nova.virt.libvirt.host [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.038 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.038 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.038 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.039 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.039 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.039 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.039 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.040 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.040 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.040 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.040 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.041 239853 DEBUG nova.virt.hardware [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.066 239853 DEBUG nova.storage.rbd_utils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.070 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:56 np0005605476 podman[264590]: 2026-02-02 17:55:56.625739227 +0000 UTC m=+0.071997655 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:55:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:55:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347524877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.697 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.753 239853 DEBUG nova.virt.libvirt.vif [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-153574159',display_name='tempest-TestVolumeBootPattern-server-153574159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-153574159',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-lnhjelc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:55:50Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=5ea4616f-2103-405a-985a-e8f8839f1a05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.753 239853 DEBUG nova.network.os_vif_util [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.754 239853 DEBUG nova.network.os_vif_util [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.756 239853 DEBUG nova.objects.instance [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ea4616f-2103-405a-985a-e8f8839f1a05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.787 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <uuid>5ea4616f-2103-405a-985a-e8f8839f1a05</uuid>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <name>instance-00000013</name>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-server-153574159</nova:name>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:55:56</nova:creationTime>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <nova:port uuid="829f5c9b-056c-42da-8802-d98a0542810c">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="serial">5ea4616f-2103-405a-985a-e8f8839f1a05</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="uuid">5ea4616f-2103-405a-985a-e8f8839f1a05</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-a9096a6c-bb47-4b06-ade8-691252f8a0da">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <serial>a9096a6c-bb47-4b06-ade8-691252f8a0da</serial>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:fc:44:73"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <target dev="tap829f5c9b-05"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/console.log" append="off"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:55:56 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:55:56 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:55:56 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:55:56 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.789 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Preparing to wait for external event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.789 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.789 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.790 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.791 239853 DEBUG nova.virt.libvirt.vif [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-153574159',display_name='tempest-TestVolumeBootPattern-server-153574159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-153574159',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-lnhjelc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:55:50Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=5ea4616f-2103-405a-985a-e8f8839f1a05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.792 239853 DEBUG nova.network.os_vif_util [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.793 239853 DEBUG nova.network.os_vif_util [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.794 239853 DEBUG os_vif [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.795 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.796 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.797 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.801 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.802 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap829f5c9b-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.803 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap829f5c9b-05, col_values=(('external_ids', {'iface-id': '829f5c9b-056c-42da-8802-d98a0542810c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:44:73', 'vm-uuid': '5ea4616f-2103-405a-985a-e8f8839f1a05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.805 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:56 np0005605476 NetworkManager[49022]: <info>  [1770054956.8068] manager: (tap829f5c9b-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.808 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.813 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.814 239853 INFO os_vif [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05')#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.902 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.903 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.903 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:fc:44:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.903 239853 INFO nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Using config drive#033[00m
Feb  2 12:55:56 np0005605476 nova_compute[239846]: 2026-02-02 17:55:56.922 239853 DEBUG nova.storage.rbd_utils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.383 239853 INFO nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Creating config drive at /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.386 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuhmyuc02 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Feb  2 12:55:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Feb  2 12:55:57 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.513 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuhmyuc02" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.552 239853 DEBUG nova.storage.rbd_utils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.557 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config 5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:55:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 62 op/s
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.635 239853 DEBUG nova.network.neutron [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updated VIF entry in instance network info cache for port 829f5c9b-056c-42da-8802-d98a0542810c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.636 239853 DEBUG nova.network.neutron [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updating instance_info_cache with network_info: [{"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.662 239853 DEBUG oslo_concurrency.lockutils [req-4fc2539e-dbdc-422d-aaa5-8c064ab36f5c req-b1db5ef2-4549-4ed1-b002-0af2cbcd53d7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.739 239853 DEBUG oslo_concurrency.processutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config 5ea4616f-2103-405a-985a-e8f8839f1a05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.740 239853 INFO nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Deleting local config drive /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05/disk.config because it was imported into RBD.#033[00m
Feb  2 12:55:57 np0005605476 NetworkManager[49022]: <info>  [1770054957.7858] manager: (tap829f5c9b-05): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Feb  2 12:55:57 np0005605476 kernel: tap829f5c9b-05: entered promiscuous mode
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.787 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:57Z|00184|binding|INFO|Claiming lport 829f5c9b-056c-42da-8802-d98a0542810c for this chassis.
Feb  2 12:55:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:57Z|00185|binding|INFO|829f5c9b-056c-42da-8802-d98a0542810c: Claiming fa:16:3e:fc:44:73 10.100.0.5
Feb  2 12:55:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:57Z|00186|binding|INFO|Setting lport 829f5c9b-056c-42da-8802-d98a0542810c ovn-installed in OVS
Feb  2 12:55:57 np0005605476 nova_compute[239846]: 2026-02-02 17:55:57.796 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:57Z|00187|binding|INFO|Setting lport 829f5c9b-056c-42da-8802-d98a0542810c up in Southbound
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.800 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:44:73 10.100.0.5'], port_security=['fa:16:3e:fc:44:73 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5ea4616f-2103-405a-985a-e8f8839f1a05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=829f5c9b-056c-42da-8802-d98a0542810c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.801 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 829f5c9b-056c-42da-8802-d98a0542810c in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.802 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.812 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1f52f79d-6de4-4d33-96cb-6a5fbddc0f7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.813 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac1b83e6-81 in ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:55:57 np0005605476 systemd-machined[208080]: New machine qemu-19-instance-00000013.
Feb  2 12:55:57 np0005605476 systemd-udevd[264692]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.815 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac1b83e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.815 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bd066999-899a-4adf-baee-8521d4d188cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.816 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0e67fe-dbe1-40bb-a3d4-ad35fed2a0a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 NetworkManager[49022]: <info>  [1770054957.8253] device (tap829f5c9b-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:55:57 np0005605476 NetworkManager[49022]: <info>  [1770054957.8258] device (tap829f5c9b-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:55:57 np0005605476 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.826 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e8f7d2-fe18-4487-aa16-9a56d341bfa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.839 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a702de4a-ffe1-4258-ba73-434e8229e9c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.859 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[50e23015-0a55-43fa-9ef5-708ea9c2b6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.864 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b19ac1ad-df4e-442b-818d-93c4711d3156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 NetworkManager[49022]: <info>  [1770054957.8655] manager: (tapac1b83e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.888 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6d60a86e-e892-4bbd-a8d4-6a17a1edf5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.891 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[56ea5c0b-2026-43bb-a282-13e116c4ef14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 NetworkManager[49022]: <info>  [1770054957.9072] device (tapac1b83e6-80): carrier: link connected
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.912 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[86cf6780-5404-4dfe-9def-236f9ab4b013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.928 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6725e8da-e4bb-4285-b407-f36dcacb5675]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419923, 'reachable_time': 24452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264724, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.943 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cf342ccd-224a-4cf9-92c9-fc7500ba19a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:c725'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419923, 'tstamp': 419923}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264725, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.963 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b82464c6-c8d4-473b-91d2-793359be4cd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419923, 'reachable_time': 24452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264726, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:57.993 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7446ef-e99d-4f06-bec3-9bdad981f2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.044 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3bfb55-94ae-4643-a5a0-700b491b56f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.045 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.045 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.045 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:58 np0005605476 kernel: tapac1b83e6-80: entered promiscuous mode
Feb  2 12:55:58 np0005605476 NetworkManager[49022]: <info>  [1770054958.0485] manager: (tapac1b83e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.048 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.051 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:55:58 np0005605476 ovn_controller[146041]: 2026-02-02T17:55:58Z|00188|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.052 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.053 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.053 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.056 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb0ff3a-5f42-4495-9673-46f8ff83c81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.058 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:55:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:55:58.059 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'env', 'PROCESS_TAG=haproxy-ac1b83e6-8e85-484a-9623-8960b1107077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac1b83e6-8e85-484a-9623-8960b1107077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.061 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.081 239853 DEBUG nova.compute.manager [req-113f91c4-7168-489a-a26e-c457c1336266 req-f2144f2f-870f-43c1-97c3-9f361fd9353a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.082 239853 DEBUG oslo_concurrency.lockutils [req-113f91c4-7168-489a-a26e-c457c1336266 req-f2144f2f-870f-43c1-97c3-9f361fd9353a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.083 239853 DEBUG oslo_concurrency.lockutils [req-113f91c4-7168-489a-a26e-c457c1336266 req-f2144f2f-870f-43c1-97c3-9f361fd9353a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.083 239853 DEBUG oslo_concurrency.lockutils [req-113f91c4-7168-489a-a26e-c457c1336266 req-f2144f2f-870f-43c1-97c3-9f361fd9353a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.083 239853 DEBUG nova.compute.manager [req-113f91c4-7168-489a-a26e-c457c1336266 req-f2144f2f-870f-43c1-97c3-9f361fd9353a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Processing event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.203 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054958.2022595, 5ea4616f-2103-405a-985a-e8f8839f1a05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.204 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] VM Started (Lifecycle Event)#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.207 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.215 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.218 239853 INFO nova.virt.libvirt.driver [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Instance spawned successfully.#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.220 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.245 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.249 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.249 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.250 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.250 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.251 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.251 239853 DEBUG nova.virt.libvirt.driver [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.254 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.315 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.317 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054958.2024035, 5ea4616f-2103-405a-985a-e8f8839f1a05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.317 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.348 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.352 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054958.2131934, 5ea4616f-2103-405a-985a-e8f8839f1a05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.353 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.356 239853 INFO nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Took 6.44 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.357 239853 DEBUG nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.372 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.376 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.415 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:55:58 np0005605476 podman[264800]: 2026-02-02 17:55:58.431672697 +0000 UTC m=+0.058668980 container create f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.444 239853 INFO nova.compute.manager [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Took 9.29 seconds to build instance.#033[00m
Feb  2 12:55:58 np0005605476 nova_compute[239846]: 2026-02-02 17:55:58.464 239853 DEBUG oslo_concurrency.lockutils [None req-2ee553d0-fdd6-409a-8b40-93ab0c1403db d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:55:58 np0005605476 systemd[1]: Started libpod-conmon-f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9.scope.
Feb  2 12:55:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:55:58 np0005605476 podman[264800]: 2026-02-02 17:55:58.397034094 +0000 UTC m=+0.024030397 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:55:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e927aafa333c782be167e43bfa6e08d5153ad3e1e2056953922be579e5d4bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:55:58 np0005605476 podman[264800]: 2026-02-02 17:55:58.506932003 +0000 UTC m=+0.133928316 container init f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:55:58 np0005605476 podman[264800]: 2026-02-02 17:55:58.513685503 +0000 UTC m=+0.140681786 container start f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:55:58 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [NOTICE]   (264819) : New worker (264821) forked
Feb  2 12:55:58 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [NOTICE]   (264819) : Loading success.
Feb  2 12:55:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:55:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892385844' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:55:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:55:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892385844' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:55:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 KiB/s wr, 138 op/s
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.177 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.189 239853 DEBUG nova.compute.manager [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.189 239853 DEBUG oslo_concurrency.lockutils [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.189 239853 DEBUG oslo_concurrency.lockutils [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.189 239853 DEBUG oslo_concurrency.lockutils [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.189 239853 DEBUG nova.compute.manager [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] No waiting events found dispatching network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:56:00 np0005605476 nova_compute[239846]: 2026-02-02 17:56:00.190 239853 WARNING nova.compute.manager [req-3d056cf2-ef61-4a3a-b987-422b72b48ac0 req-0311a07c-89ee-4c34-b1ca-12d0facc04a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received unexpected event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c for instance with vm_state active and task_state None.#033[00m
Feb  2 12:56:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Feb  2 12:56:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Feb  2 12:56:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Feb  2 12:56:01 np0005605476 nova_compute[239846]: 2026-02-02 17:56:01.504 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 27 KiB/s wr, 207 op/s
Feb  2 12:56:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Feb  2 12:56:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Feb  2 12:56:01 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Feb  2 12:56:01 np0005605476 nova_compute[239846]: 2026-02-02 17:56:01.805 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 192 op/s
Feb  2 12:56:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Feb  2 12:56:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Feb  2 12:56:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Feb  2 12:56:03 np0005605476 nova_compute[239846]: 2026-02-02 17:56:03.957 239853 DEBUG nova.compute.manager [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-changed-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:03 np0005605476 nova_compute[239846]: 2026-02-02 17:56:03.958 239853 DEBUG nova.compute.manager [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Refreshing instance network info cache due to event network-changed-829f5c9b-056c-42da-8802-d98a0542810c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:56:03 np0005605476 nova_compute[239846]: 2026-02-02 17:56:03.958 239853 DEBUG oslo_concurrency.lockutils [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:56:03 np0005605476 nova_compute[239846]: 2026-02-02 17:56:03.958 239853 DEBUG oslo_concurrency.lockutils [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:56:03 np0005605476 nova_compute[239846]: 2026-02-02 17:56:03.958 239853 DEBUG nova.network.neutron [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Refreshing network info cache for port 829f5c9b-056c-42da-8802-d98a0542810c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/620948279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/620948279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603845160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:56:04 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603845160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:56:05 np0005605476 nova_compute[239846]: 2026-02-02 17:56:05.179 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Feb  2 12:56:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Feb  2 12:56:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Feb  2 12:56:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 KiB/s wr, 128 op/s
Feb  2 12:56:06 np0005605476 nova_compute[239846]: 2026-02-02 17:56:06.139 239853 DEBUG nova.network.neutron [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updated VIF entry in instance network info cache for port 829f5c9b-056c-42da-8802-d98a0542810c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:56:06 np0005605476 nova_compute[239846]: 2026-02-02 17:56:06.140 239853 DEBUG nova.network.neutron [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updating instance_info_cache with network_info: [{"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:06 np0005605476 nova_compute[239846]: 2026-02-02 17:56:06.141 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:06 np0005605476 nova_compute[239846]: 2026-02-02 17:56:06.158 239853 DEBUG oslo_concurrency.lockutils [req-59696587-ec0e-4358-942c-50fee9f01b43 req-cc3a05d8-0362-4c78-b198-9ded219354a4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-5ea4616f-2103-405a-985a-e8f8839f1a05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:56:06 np0005605476 nova_compute[239846]: 2026-02-02 17:56:06.807 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.8 KiB/s wr, 36 op/s
Feb  2 12:56:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Feb  2 12:56:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Feb  2 12:56:08 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Feb  2 12:56:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 151 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 456 KiB/s rd, 3.4 MiB/s wr, 124 op/s
Feb  2 12:56:10 np0005605476 nova_compute[239846]: 2026-02-02 17:56:10.181 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Feb  2 12:56:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Feb  2 12:56:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Feb  2 12:56:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:10Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:44:73 10.100.0.5
Feb  2 12:56:10 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:10Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:44:73 10.100.0.5
Feb  2 12:56:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 155 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 4.0 MiB/s wr, 131 op/s
Feb  2 12:56:11 np0005605476 nova_compute[239846]: 2026-02-02 17:56:11.811 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 155 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 404 KiB/s rd, 3.1 MiB/s wr, 99 op/s
Feb  2 12:56:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Feb  2 12:56:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Feb  2 12:56:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Feb  2 12:56:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:56:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2272825105' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:56:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:56:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2272825105' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:56:15 np0005605476 nova_compute[239846]: 2026-02-02 17:56:15.182 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 701 KiB/s rd, 3.7 MiB/s wr, 239 op/s
Feb  2 12:56:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:56:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818656236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:56:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:56:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818656236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:56:16 np0005605476 nova_compute[239846]: 2026-02-02 17:56:16.813 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 669 KiB/s wr, 138 op/s
Feb  2 12:56:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 96 KiB/s wr, 97 op/s
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.028 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.186 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Feb  2 12:56:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Feb  2 12:56:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:20Z|00189|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:56:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.527 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.595 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.596 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.596 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.596 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.596 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.597 239853 INFO nova.compute.manager [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Terminating instance#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.598 239853 DEBUG nova.compute.manager [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:56:20 np0005605476 kernel: tap829f5c9b-05 (unregistering): left promiscuous mode
Feb  2 12:56:20 np0005605476 NetworkManager[49022]: <info>  [1770054980.8195] device (tap829f5c9b-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:56:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:20Z|00190|binding|INFO|Releasing lport 829f5c9b-056c-42da-8802-d98a0542810c from this chassis (sb_readonly=0)
Feb  2 12:56:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:20Z|00191|binding|INFO|Setting lport 829f5c9b-056c-42da-8802-d98a0542810c down in Southbound
Feb  2 12:56:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:20Z|00192|binding|INFO|Removing iface tap829f5c9b-05 ovn-installed in OVS
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.825 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.826 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 nova_compute[239846]: 2026-02-02 17:56:20.838 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:20 np0005605476 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb  2 12:56:20 np0005605476 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 12.191s CPU time.
Feb  2 12:56:20 np0005605476 systemd-machined[208080]: Machine qemu-19-instance-00000013 terminated.
Feb  2 12:56:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:20.905 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:44:73 10.100.0.5'], port_security=['fa:16:3e:fc:44:73 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5ea4616f-2103-405a-985a-e8f8839f1a05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=829f5c9b-056c-42da-8802-d98a0542810c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:56:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:20.907 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 829f5c9b-056c-42da-8802-d98a0542810c in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:56:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:20.908 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac1b83e6-8e85-484a-9623-8960b1107077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:56:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:20.909 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8caffab0-1607-4ce4-b024-552139f9cdb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:20.910 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace which is not needed anymore#033[00m
Feb  2 12:56:21 np0005605476 kernel: tap829f5c9b-05: entered promiscuous mode
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [NOTICE]   (264819) : haproxy version is 2.8.14-c23fe91
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [NOTICE]   (264819) : path to executable is /usr/sbin/haproxy
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [WARNING]  (264819) : Exiting Master process...
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [WARNING]  (264819) : Exiting Master process...
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [ALERT]    (264819) : Current worker (264821) exited with code 143 (Terminated)
Feb  2 12:56:21 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[264815]: [WARNING]  (264819) : All workers exited. Exiting... (0)
Feb  2 12:56:21 np0005605476 kernel: tap829f5c9b-05 (unregistering): left promiscuous mode
Feb  2 12:56:21 np0005605476 NetworkManager[49022]: <info>  [1770054981.0246] manager: (tap829f5c9b-05): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb  2 12:56:21 np0005605476 systemd[1]: libpod-f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9.scope: Deactivated successfully.
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.027 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:21 np0005605476 podman[264855]: 2026-02-02 17:56:21.030014293 +0000 UTC m=+0.051551811 container died f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.038 239853 INFO nova.virt.libvirt.driver [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Instance destroyed successfully.#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.039 239853 DEBUG nova.objects.instance [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid 5ea4616f-2103-405a-985a-e8f8839f1a05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.052 239853 DEBUG nova.virt.libvirt.vif [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-153574159',display_name='tempest-TestVolumeBootPattern-server-153574159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-153574159',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:55:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-lnhjelc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:55:58Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=5ea4616f-2103-405a-985a-e8f8839f1a05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.053 239853 DEBUG nova.network.os_vif_util [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "829f5c9b-056c-42da-8802-d98a0542810c", "address": "fa:16:3e:fc:44:73", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap829f5c9b-05", "ovs_interfaceid": "829f5c9b-056c-42da-8802-d98a0542810c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.053 239853 DEBUG nova.network.os_vif_util [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.054 239853 DEBUG os_vif [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.055 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.056 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap829f5c9b-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.059 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.061 239853 INFO os_vif [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:44:73,bridge_name='br-int',has_traffic_filtering=True,id=829f5c9b-056c-42da-8802-d98a0542810c,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap829f5c9b-05')#033[00m
Feb  2 12:56:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9-userdata-shm.mount: Deactivated successfully.
Feb  2 12:56:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-92e927aafa333c782be167e43bfa6e08d5153ad3e1e2056953922be579e5d4bc-merged.mount: Deactivated successfully.
Feb  2 12:56:21 np0005605476 podman[264855]: 2026-02-02 17:56:21.191945935 +0000 UTC m=+0.213483423 container cleanup f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:56:21 np0005605476 systemd[1]: libpod-conmon-f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9.scope: Deactivated successfully.
Feb  2 12:56:21 np0005605476 podman[264913]: 2026-02-02 17:56:21.333866655 +0000 UTC m=+0.122740792 container remove f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.347 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d21d7e00-1a3d-4f68-acee-6ae9dfa6fb27]: (4, ('Mon Feb  2 05:56:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9)\nf3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9\nMon Feb  2 05:56:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (f3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9)\nf3ae3cf3b403cf9e375ceb68a21b8f284aec7914eb66989fd551c30b1c1c50d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.349 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[38d16d13-2e62-403b-861f-b3daf922962a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.350 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:21 np0005605476 kernel: tapac1b83e6-80: left promiscuous mode
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.353 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.357 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.360 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dd6961-eaec-4e14-a5fa-cced7ac76b36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.375 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cca51266-6cc7-4892-ae1d-fa97908897d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.377 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ba374e8c-105c-4320-95f5-bd3ab3482e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.391 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[78abe1c5-f451-4c8d-a6e8-3e5ce289152e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419918, 'reachable_time': 20493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264928, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 systemd[1]: run-netns-ovnmeta\x2dac1b83e6\x2d8e85\x2d484a\x2d9623\x2d8960b1107077.mount: Deactivated successfully.
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.394 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:56:21 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:21.394 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[02420da1-f11d-4518-8251-9ac4cd164794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 136 KiB/s wr, 129 op/s
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.913 239853 INFO nova.virt.libvirt.driver [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Deleting instance files /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05_del#033[00m
Feb  2 12:56:21 np0005605476 nova_compute[239846]: 2026-02-02 17:56:21.914 239853 INFO nova.virt.libvirt.driver [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Deletion of /var/lib/nova/instances/5ea4616f-2103-405a-985a-e8f8839f1a05_del complete#033[00m
Feb  2 12:56:22 np0005605476 nova_compute[239846]: 2026-02-02 17:56:22.043 239853 INFO nova.compute.manager [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:56:22 np0005605476 nova_compute[239846]: 2026-02-02 17:56:22.044 239853 DEBUG oslo.service.loopingcall [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:56:22 np0005605476 nova_compute[239846]: 2026-02-02 17:56:22.045 239853 DEBUG nova.compute.manager [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:56:22 np0005605476 nova_compute[239846]: 2026-02-02 17:56:22.045 239853 DEBUG nova.network.neutron [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:56:22 np0005605476 podman[264930]: 2026-02-02 17:56:22.602789847 +0000 UTC m=+0.048177565 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:56:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 112 KiB/s wr, 106 op/s
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.763 239853 DEBUG nova.compute.manager [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-unplugged-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.764 239853 DEBUG oslo_concurrency.lockutils [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.764 239853 DEBUG oslo_concurrency.lockutils [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.765 239853 DEBUG oslo_concurrency.lockutils [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.765 239853 DEBUG nova.compute.manager [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] No waiting events found dispatching network-vif-unplugged-829f5c9b-056c-42da-8802-d98a0542810c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:56:23 np0005605476 nova_compute[239846]: 2026-02-02 17:56:23.765 239853 DEBUG nova.compute.manager [req-7d08fd15-4b2f-4735-a82d-e37afcfb8306 req-dd6abab4-8e6b-49a3-8697-200e65b875e0 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-unplugged-829f5c9b-056c-42da-8802-d98a0542810c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:56:24 np0005605476 nova_compute[239846]: 2026-02-02 17:56:24.586 239853 DEBUG nova.network.neutron [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:24 np0005605476 nova_compute[239846]: 2026-02-02 17:56:24.634 239853 INFO nova.compute.manager [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Took 2.59 seconds to deallocate network for instance.#033[00m
Feb  2 12:56:24 np0005605476 nova_compute[239846]: 2026-02-02 17:56:24.869 239853 INFO nova.compute.manager [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:56:24 np0005605476 nova_compute[239846]: 2026-02-02 17:56:24.993 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:24 np0005605476 nova_compute[239846]: 2026-02-02 17:56:24.994 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.042 239853 DEBUG oslo_concurrency.processutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.145 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.187 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:56:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730762125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:56:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 37 KiB/s wr, 37 op/s
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.585 239853 DEBUG oslo_concurrency.processutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.590 239853 DEBUG nova.compute.provider_tree [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.623 239853 DEBUG nova.scheduler.client.report [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.700 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.759 239853 INFO nova.scheduler.client.report [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance 5ea4616f-2103-405a-985a-e8f8839f1a05#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.876 239853 DEBUG nova.compute.manager [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.877 239853 DEBUG oslo_concurrency.lockutils [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.877 239853 DEBUG oslo_concurrency.lockutils [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.878 239853 DEBUG oslo_concurrency.lockutils [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.878 239853 DEBUG nova.compute.manager [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] No waiting events found dispatching network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.878 239853 WARNING nova.compute.manager [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received unexpected event network-vif-plugged-829f5c9b-056c-42da-8802-d98a0542810c for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.879 239853 DEBUG nova.compute.manager [req-baaea248-b3ca-4aff-b493-880f5d946edf req-1d8c2b8f-ee30-44a0-909b-10810efc70a8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Received event network-vif-deleted-829f5c9b-056c-42da-8802-d98a0542810c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:25 np0005605476 nova_compute[239846]: 2026-02-02 17:56:25.932 239853 DEBUG oslo_concurrency.lockutils [None req-195c592c-b388-4c4e-b9d3-ccc28e51acc8 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "5ea4616f-2103-405a-985a-e8f8839f1a05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:26 np0005605476 nova_compute[239846]: 2026-02-02 17:56:26.059 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:26 np0005605476 nova_compute[239846]: 2026-02-02 17:56:26.898 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:26 np0005605476 nova_compute[239846]: 2026-02-02 17:56:26.975 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 37 KiB/s wr, 37 op/s
Feb  2 12:56:27 np0005605476 podman[264972]: 2026-02-02 17:56:27.619753629 +0000 UTC m=+0.069716681 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  2 12:56:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 35 KiB/s wr, 33 op/s
Feb  2 12:56:30 np0005605476 nova_compute[239846]: 2026-02-02 17:56:30.189 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.062 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.073 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.073 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.091 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.196 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.197 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.203 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.204 239853 INFO nova.compute.claims [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.317 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 32 KiB/s wr, 30 op/s
Feb  2 12:56:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:56:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828253155' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.815 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.819 239853 DEBUG nova.compute.provider_tree [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.838 239853 DEBUG nova.scheduler.client.report [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.868 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.869 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.926 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.927 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.944 239853 INFO nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:56:31 np0005605476 nova_compute[239846]: 2026-02-02 17:56:31.964 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.003 239853 INFO nova.virt.block_device [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Booting with volume a9096a6c-bb47-4b06-ade8-691252f8a0da at /dev/vda#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.191 239853 DEBUG nova.policy [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.407 239853 DEBUG os_brick.utils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.408 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.417 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.417 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[8f842c7b-6c66-48f0-8190-997e890bb324]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.418 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.424 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.424 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[2e805400-8239-4c63-94a8-dc806e6f168d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.425 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.431 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.431 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd917a8-f78c-4f36-bd40-45c1f755acc0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.432 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[db966551-0b6b-4719-8cde-5ed6f1addf61]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.433 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.450 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.452 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.452 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.452 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.452 239853 DEBUG os_brick.utils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:56:32 np0005605476 nova_compute[239846]: 2026-02-02 17:56:32.453 239853 DEBUG nova.virt.block_device [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating existing volume attachment record: 5c3c3eee-ad1a-437c-be1f-a49154ed1ba6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:56:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:56:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3852246801' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:56:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 15 op/s
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.712 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Successfully created port: 41e29f7d-c6b6-4096-beb4-01675925dfbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.716 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.717 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.717 239853 INFO nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Creating image(s)#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.718 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.718 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Ensure instance console log exists: /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.718 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.718 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:33 np0005605476 nova_compute[239846]: 2026-02-02 17:56:33.718 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.679 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Successfully updated port: 41e29f7d-c6b6-4096-beb4-01675925dfbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.713 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.714 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.714 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.898 239853 DEBUG nova.compute.manager [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.899 239853 DEBUG nova.compute.manager [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing instance network info cache due to event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.899 239853 DEBUG oslo_concurrency.lockutils [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:56:34 np0005605476 nova_compute[239846]: 2026-02-02 17:56:34.961 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:56:35 np0005605476 nova_compute[239846]: 2026-02-02 17:56:35.192 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 15 op/s
Feb  2 12:56:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.038 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770054981.0369341, 5ea4616f-2103-405a-985a-e8f8839f1a05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.038 239853 INFO nova.compute.manager [-] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.059 239853 DEBUG nova.compute.manager [None req-53c6848b-3be2-4664-982d-22b0157136da - - - - - -] [instance: 5ea4616f-2103-405a-985a-e8f8839f1a05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.063 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.280 239853 DEBUG nova.network.neutron [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.551 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.551 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Instance network_info: |[{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.551 239853 DEBUG oslo_concurrency.lockutils [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.552 239853 DEBUG nova.network.neutron [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.555 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Start _get_guest_xml network_info=[{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '5c3c3eee-ad1a-437c-be1f-a49154ed1ba6', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a9096a6c-bb47-4b06-ade8-691252f8a0da', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a9096a6c-bb47-4b06-ade8-691252f8a0da', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '918af4a9-09ac-4a18-b2bd-f7ea2c0e7452', 'attached_at': '', 'detached_at': '', 'volume_id': 'a9096a6c-bb47-4b06-ade8-691252f8a0da', 'serial': 'a9096a6c-bb47-4b06-ade8-691252f8a0da'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.559 239853 WARNING nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.563 239853 DEBUG nova.virt.libvirt.host [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.564 239853 DEBUG nova.virt.libvirt.host [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.567 239853 DEBUG nova.virt.libvirt.host [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.567 239853 DEBUG nova.virt.libvirt.host [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.567 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.568 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.568 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.569 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.569 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.569 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.569 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.570 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.570 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.570 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.570 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.571 239853 DEBUG nova.virt.hardware [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.592 239853 DEBUG nova.storage.rbd_utils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:56:36 np0005605476 nova_compute[239846]: 2026-02-02 17:56:36.596 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:56:36
Feb  2 12:56:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:56:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:56:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'vms', 'default.rgw.meta']
Feb  2 12:56:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:56:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:56:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337538227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.095 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.117 239853 DEBUG nova.virt.libvirt.vif [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2926141',display_name='tempest-TestVolumeBootPattern-server-2926141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2926141',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-85ag0g5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:56:31Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=918af4a9-09ac-4a18-b2bd-f7ea2c0e7452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.117 239853 DEBUG nova.network.os_vif_util [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.118 239853 DEBUG nova.network.os_vif_util [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.118 239853 DEBUG nova.objects.instance [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.133 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <uuid>918af4a9-09ac-4a18-b2bd-f7ea2c0e7452</uuid>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <name>instance-00000014</name>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-server-2926141</nova:name>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:56:36</nova:creationTime>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <nova:port uuid="41e29f7d-c6b6-4096-beb4-01675925dfbb">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="serial">918af4a9-09ac-4a18-b2bd-f7ea2c0e7452</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="uuid">918af4a9-09ac-4a18-b2bd-f7ea2c0e7452</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-a9096a6c-bb47-4b06-ade8-691252f8a0da">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <serial>a9096a6c-bb47-4b06-ade8-691252f8a0da</serial>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:f7:77:e4"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <target dev="tap41e29f7d-c6"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/console.log" append="off"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:56:37 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:56:37 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:56:37 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:56:37 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.134 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Preparing to wait for external event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.134 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.134 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.134 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.135 239853 DEBUG nova.virt.libvirt.vif [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2926141',display_name='tempest-TestVolumeBootPattern-server-2926141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2926141',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-85ag0g5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:56:31Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=918af4a9-09ac-4a18-b2bd-f7ea2c0e7452,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.135 239853 DEBUG nova.network.os_vif_util [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.135 239853 DEBUG nova.network.os_vif_util [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.136 239853 DEBUG os_vif [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.136 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.137 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.137 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.139 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.139 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41e29f7d-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.139 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41e29f7d-c6, col_values=(('external_ids', {'iface-id': '41e29f7d-c6b6-4096-beb4-01675925dfbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:77:e4', 'vm-uuid': '918af4a9-09ac-4a18-b2bd-f7ea2c0e7452'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.140 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:37 np0005605476 NetworkManager[49022]: <info>  [1770054997.1418] manager: (tap41e29f7d-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.143 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.144 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.145 239853 INFO os_vif [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6')#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.196 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.197 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.197 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:f7:77:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.198 239853 INFO nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Using config drive#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.220 239853 DEBUG nova.storage.rbd_utils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:56:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.862 239853 INFO nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Creating config drive at /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.865 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdz3h5kxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:37 np0005605476 nova_compute[239846]: 2026-02-02 17:56:37.990 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdz3h5kxx" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.027 239853 DEBUG nova.storage.rbd_utils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.032 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.104 239853 DEBUG nova.network.neutron [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updated VIF entry in instance network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.105 239853 DEBUG nova.network.neutron [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.121 239853 DEBUG oslo_concurrency.lockutils [req-31353197-2617-4284-8f92-aa2c4f3ec95e req-46f0b3d6-007b-4a25-8847-bb0066161d63 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.151 239853 DEBUG oslo_concurrency.processutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.151 239853 INFO nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Deleting local config drive /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452/disk.config because it was imported into RBD.#033[00m
Feb  2 12:56:38 np0005605476 kernel: tap41e29f7d-c6: entered promiscuous mode
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.1882] manager: (tap41e29f7d-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Feb  2 12:56:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:38Z|00193|binding|INFO|Claiming lport 41e29f7d-c6b6-4096-beb4-01675925dfbb for this chassis.
Feb  2 12:56:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:38Z|00194|binding|INFO|41e29f7d-c6b6-4096-beb4-01675925dfbb: Claiming fa:16:3e:f7:77:e4 10.100.0.7
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.190 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.192 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.205 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:77:e4 10.100.0.7'], port_security=['fa:16:3e:f7:77:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '918af4a9-09ac-4a18-b2bd-f7ea2c0e7452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=41e29f7d-c6b6-4096-beb4-01675925dfbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.206 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 41e29f7d-c6b6-4096-beb4-01675925dfbb in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.208 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:56:38 np0005605476 systemd-machined[208080]: New machine qemu-20-instance-00000014.
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.214 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7cba7238-b89f-4e41-9282-b3f6b90ce2a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.215 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac1b83e6-81 in ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.218 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac1b83e6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.218 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb840de-d4c0-4ed3-87f3-347cfc4bbd7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.218 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[796b5195-15dd-40b4-979a-da144cdbb3df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Feb  2 12:56:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:38Z|00195|binding|INFO|Setting lport 41e29f7d-c6b6-4096-beb4-01675925dfbb ovn-installed in OVS
Feb  2 12:56:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:38Z|00196|binding|INFO|Setting lport 41e29f7d-c6b6-4096-beb4-01675925dfbb up in Southbound
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.226 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[dde82d82-e330-4392-be40-5f05ec1dd136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.227 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 systemd-udevd[265143]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.2392] device (tap41e29f7d-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.239 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f97419bc-4563-4bf3-ace2-9804339c61c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.2408] device (tap41e29f7d-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.259 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef73554-a6c9-4793-a475-99f572233027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.262 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2cf1d9-d0cb-4134-bc17-63c92b6dcd1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.2646] manager: (tapac1b83e6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.287 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f29f2fa4-3612-4edd-a20f-8a5fb4e91f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.290 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f40a821d-1e4b-4a0a-84a1-ef443a345de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.3048] device (tapac1b83e6-80): carrier: link connected
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.309 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7e3f54-536c-438a-aafd-5067738b05f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.323 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab83d1dc-085a-44bb-bf57-4431eebd502d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423963, 'reachable_time': 31310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265174, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.334 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[818b0af2-e9eb-4139-b24e-6b6896bb0f2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:c725'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423963, 'tstamp': 423963}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265175, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.347 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[63685e0d-e0c9-4e0b-9cd4-b4a964c02215]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423963, 'reachable_time': 31310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265176, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.367 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe07d39-d303-4c99-8fc0-0c8c57685900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.411 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e6c16b-7644-4c07-b42a-e6252683d351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.412 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.412 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.413 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:38 np0005605476 NetworkManager[49022]: <info>  [1770054998.4158] manager: (tapac1b83e6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Feb  2 12:56:38 np0005605476 kernel: tapac1b83e6-80: entered promiscuous mode
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.417 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.418 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.419 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:38Z|00197|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.421 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.422 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[912dc51d-90ef-4e56-af83-64a2fa05604e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.423 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/ac1b83e6-8e85-484a-9623-8960b1107077.pid.haproxy
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID ac1b83e6-8e85-484a-9623-8960b1107077
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:56:38 np0005605476 nova_compute[239846]: 2026-02-02 17:56:38.425 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:38.425 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'env', 'PROCESS_TAG=haproxy-ac1b83e6-8e85-484a-9623-8960b1107077', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac1b83e6-8e85-484a-9623-8960b1107077.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:56:38 np0005605476 podman[265209]: 2026-02-02 17:56:38.817712315 +0000 UTC m=+0.086894509 container create d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:56:38 np0005605476 podman[265209]: 2026-02-02 17:56:38.75362769 +0000 UTC m=+0.022809904 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:56:38 np0005605476 systemd[1]: Started libpod-conmon-d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66.scope.
Feb  2 12:56:38 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:38 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66738f997b7f688c12dcc960f89abfddde894243b4e42345235bf9de6fb520b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:38 np0005605476 podman[265209]: 2026-02-02 17:56:38.898668265 +0000 UTC m=+0.167850489 container init d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:56:38 np0005605476 podman[265209]: 2026-02-02 17:56:38.902545614 +0000 UTC m=+0.171727808 container start d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:56:38 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [NOTICE]   (265228) : New worker (265230) forked
Feb  2 12:56:38 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [NOTICE]   (265228) : Loading success.
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.084 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054999.0842624, 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.085 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] VM Started (Lifecycle Event)#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.101 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.104 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770054999.0847068, 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.104 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.118 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.121 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.138 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.286 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.287 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.287 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.288 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.288 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 12 KiB/s wr, 9 op/s
Feb  2 12:56:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:56:39 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2282210096' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.788 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.846 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.846 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.973 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.974 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4285MB free_disk=59.98779747262597GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.974 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:39 np0005605476 nova_compute[239846]: 2026-02-02 17:56:39.974 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.050 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.050 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.051 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.080 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.194 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.358 239853 DEBUG nova.compute.manager [req-e7a06567-1c6a-4d70-8683-ed90d4cbe7b4 req-28f17860-ffc0-48f6-a1ae-5db265d37b3c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.358 239853 DEBUG oslo_concurrency.lockutils [req-e7a06567-1c6a-4d70-8683-ed90d4cbe7b4 req-28f17860-ffc0-48f6-a1ae-5db265d37b3c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.359 239853 DEBUG oslo_concurrency.lockutils [req-e7a06567-1c6a-4d70-8683-ed90d4cbe7b4 req-28f17860-ffc0-48f6-a1ae-5db265d37b3c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.359 239853 DEBUG oslo_concurrency.lockutils [req-e7a06567-1c6a-4d70-8683-ed90d4cbe7b4 req-28f17860-ffc0-48f6-a1ae-5db265d37b3c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.360 239853 DEBUG nova.compute.manager [req-e7a06567-1c6a-4d70-8683-ed90d4cbe7b4 req-28f17860-ffc0-48f6-a1ae-5db265d37b3c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Processing event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.361 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.381 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.381 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055000.38069, 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.382 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.387 239853 INFO nova.virt.libvirt.driver [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Instance spawned successfully.#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.388 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.418 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.424 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.425 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.425 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.426 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.427 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.428 239853 DEBUG nova.virt.libvirt.driver [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.434 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.483 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.507 239853 INFO nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Took 6.79 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.508 239853 DEBUG nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.571 239853 INFO nova.compute.manager [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Took 9.42 seconds to build instance.#033[00m
Feb  2 12:56:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:56:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491642386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.585 239853 DEBUG oslo_concurrency.lockutils [None req-f6c2edee-77c3-41a2-8622-e70916abe908 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.599 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.603 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:56:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.619 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.637 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:56:40 np0005605476 nova_compute[239846]: 2026-02-02 17:56:40.638 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.141 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.594 239853 DEBUG nova.compute.manager [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.594 239853 DEBUG oslo_concurrency.lockutils [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.594 239853 DEBUG oslo_concurrency.lockutils [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.595 239853 DEBUG oslo_concurrency.lockutils [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.595 239853 DEBUG nova.compute.manager [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] No waiting events found dispatching network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.595 239853 WARNING nova.compute.manager [req-d2fb110c-e5ac-4599-a234-207ed82d3f96 req-5de5c4ba-d853-4b95-a0ee-380a48b3911e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received unexpected event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb for instance with vm_state active and task_state None.#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.634 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.634 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.634 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:56:42 np0005605476 nova_compute[239846]: 2026-02-02 17:56:42.635 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:56:43 np0005605476 nova_compute[239846]: 2026-02-02 17:56:43.452 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:56:43 np0005605476 nova_compute[239846]: 2026-02-02 17:56:43.452 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:56:43 np0005605476 nova_compute[239846]: 2026-02-02 17:56:43.453 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:56:43 np0005605476 nova_compute[239846]: 2026-02-02 17:56:43.453 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:56:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Feb  2 12:56:45 np0005605476 nova_compute[239846]: 2026-02-02 17:56:45.196 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 58 op/s
Feb  2 12:56:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:46.648 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:56:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:46.649 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:56:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:46.649 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:56:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:47.101 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:56:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:47.102 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.101 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.143 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.506 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.0495540755268744e-05 of space, bias 1.0, pg target 0.003148662226580623 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011226778936438895 of space, bias 1.0, pg target 0.33680336809316685 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.036719084049937e-06 of space, bias 1.0, pg target 0.0006110157252149811 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665089109132258 of space, bias 1.0, pg target 0.19995267327396773 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.821473042340983e-07 of space, bias 4.0, pg target 0.0011785767650809179 quantized to 16 (current 16)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:56:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.614 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.614 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.615 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.615 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:47 np0005605476 nova_compute[239846]: 2026-02-02 17:56:47.615 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.526 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:48 np0005605476 NetworkManager[49022]: <info>  [1770055008.5387] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb  2 12:56:48 np0005605476 NetworkManager[49022]: <info>  [1770055008.5395] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.597 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:48 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:48Z|00198|binding|INFO|Releasing lport 25290ff2-fb45-4116-8eb3-96ed5f17945e from this chassis (sb_readonly=0)
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.622 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.932 239853 DEBUG nova.compute.manager [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.933 239853 DEBUG nova.compute.manager [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing instance network info cache due to event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.933 239853 DEBUG oslo_concurrency.lockutils [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.934 239853 DEBUG oslo_concurrency.lockutils [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:56:48 np0005605476 nova_compute[239846]: 2026-02-02 17:56:48.935 239853 DEBUG nova.network.neutron [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:56:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 12:56:50 np0005605476 nova_compute[239846]: 2026-02-02 17:56:50.197 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:50 np0005605476 nova_compute[239846]: 2026-02-02 17:56:50.640 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:51 np0005605476 nova_compute[239846]: 2026-02-02 17:56:51.028 239853 DEBUG nova.network.neutron [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updated VIF entry in instance network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:56:51 np0005605476 nova_compute[239846]: 2026-02-02 17:56:51.028 239853 DEBUG nova.network.neutron [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:56:51 np0005605476 nova_compute[239846]: 2026-02-02 17:56:51.044 239853 DEBUG oslo_concurrency.lockutils [req-ce061d28-f486-40f8-b412-7dc99c1b2caf req-56e431fc-cbaf-45f9-ae91-7e2ced9a303e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:56:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Feb  2 12:56:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:51Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.7
Feb  2 12:56:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:51Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f7:77:e4 10.100.0.7
Feb  2 12:56:52 np0005605476 nova_compute[239846]: 2026-02-02 17:56:52.144 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:53 np0005605476 podman[265353]: 2026-02-02 17:56:53.256979237 +0000 UTC m=+0.065812654 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb  2 12:56:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 KiB/s wr, 71 op/s
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:56:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.064890504 +0000 UTC m=+0.041235433 container create f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:56:54 np0005605476 systemd[1]: Started libpod-conmon-f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc.scope.
Feb  2 12:56:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:56:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:54 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:56:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.122294351 +0000 UTC m=+0.098639300 container init f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.131074018 +0000 UTC m=+0.107418947 container start f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 12:56:54 np0005605476 focused_neumann[265506]: 167 167
Feb  2 12:56:54 np0005605476 systemd[1]: libpod-f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc.scope: Deactivated successfully.
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.14037932 +0000 UTC m=+0.116724249 container attach f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.14073729 +0000 UTC m=+0.117082219 container died f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.050712594 +0000 UTC m=+0.027057543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-368b21b899aa5f1ba6b828c995b28a1ec915f37244c138d336c7c9a32b9f5edc-merged.mount: Deactivated successfully.
Feb  2 12:56:54 np0005605476 podman[265490]: 2026-02-02 17:56:54.17557438 +0000 UTC m=+0.151919319 container remove f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_neumann, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:56:54 np0005605476 systemd[1]: libpod-conmon-f9821155b7bc117e363bce9799b75b913c9d35e568b04c6e6cf5c596a5b3cdcc.scope: Deactivated successfully.
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.311737275 +0000 UTC m=+0.050165234 container create c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:56:54 np0005605476 systemd[1]: Started libpod-conmon-c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa.scope.
Feb  2 12:56:54 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:54 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.283795508 +0000 UTC m=+0.022223557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.388566569 +0000 UTC m=+0.126994608 container init c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.395663819 +0000 UTC m=+0.134091818 container start c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.400117495 +0000 UTC m=+0.138545454 container attach c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:56:54 np0005605476 nova_compute[239846]: 2026-02-02 17:56:54.618 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:54 np0005605476 boring_williams[265546]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:56:54 np0005605476 boring_williams[265546]: --> All data devices are unavailable
Feb  2 12:56:54 np0005605476 systemd[1]: libpod-c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa.scope: Deactivated successfully.
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.80716552 +0000 UTC m=+0.545593479 container died c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:56:54 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4a3c6432e944045c46769932978144e5203ce540ba036e4cc79626e8f4abbe7a-merged.mount: Deactivated successfully.
Feb  2 12:56:54 np0005605476 podman[265530]: 2026-02-02 17:56:54.860730559 +0000 UTC m=+0.599158548 container remove c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williams, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:56:54 np0005605476 systemd[1]: libpod-conmon-c3b2c598bc58ce91099b016e1d974e410d9f85de41cd5bd541c5e10b0269b7fa.scope: Deactivated successfully.
Feb  2 12:56:55 np0005605476 nova_compute[239846]: 2026-02-02 17:56:55.199 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.236208425 +0000 UTC m=+0.035796180 container create 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 12:56:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:55Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.7
Feb  2 12:56:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:55Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f7:77:e4 10.100.0.7
Feb  2 12:56:55 np0005605476 systemd[1]: Started libpod-conmon-55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401.scope.
Feb  2 12:56:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.305963849 +0000 UTC m=+0.105551694 container init 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.312590016 +0000 UTC m=+0.112177811 container start 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 12:56:55 np0005605476 xenodochial_albattani[265652]: 167 167
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.31626854 +0000 UTC m=+0.115856325 container attach 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.219881255 +0000 UTC m=+0.019469040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:55 np0005605476 systemd[1]: libpod-55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401.scope: Deactivated successfully.
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.317271718 +0000 UTC m=+0.116859503 container died 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:56:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a0a26751f60309a07dd179653625509b88a017444b3d919464604d5b372d11b1-merged.mount: Deactivated successfully.
Feb  2 12:56:55 np0005605476 podman[265636]: 2026-02-02 17:56:55.360560867 +0000 UTC m=+0.160148622 container remove 55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:56:55 np0005605476 systemd[1]: libpod-conmon-55c9dc35d8eda304bac607f9ccd6b0375f3988f3a48e8e0df3b5d64e423c0401.scope: Deactivated successfully.
Feb  2 12:56:55 np0005605476 podman[265676]: 2026-02-02 17:56:55.531244435 +0000 UTC m=+0.050081642 container create 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:56:55 np0005605476 systemd[1]: Started libpod-conmon-31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2.scope.
Feb  2 12:56:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 105 op/s
Feb  2 12:56:55 np0005605476 podman[265676]: 2026-02-02 17:56:55.512203579 +0000 UTC m=+0.031040766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed74bb9873634b4345b5b2c9a9767953dc7e49cd8eca0c0c1d65709c1e97d72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:56:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed74bb9873634b4345b5b2c9a9767953dc7e49cd8eca0c0c1d65709c1e97d72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed74bb9873634b4345b5b2c9a9767953dc7e49cd8eca0c0c1d65709c1e97d72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed74bb9873634b4345b5b2c9a9767953dc7e49cd8eca0c0c1d65709c1e97d72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:55 np0005605476 podman[265676]: 2026-02-02 17:56:55.635176572 +0000 UTC m=+0.154013749 container init 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:56:55 np0005605476 podman[265676]: 2026-02-02 17:56:55.643335402 +0000 UTC m=+0.162172579 container start 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:56:55 np0005605476 podman[265676]: 2026-02-02 17:56:55.646997175 +0000 UTC m=+0.165834392 container attach 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]: {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    "0": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "devices": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "/dev/loop3"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            ],
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_name": "ceph_lv0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_size": "21470642176",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "name": "ceph_lv0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "tags": {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_name": "ceph",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.crush_device_class": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.encrypted": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.objectstore": "bluestore",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_id": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.vdo": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.with_tpm": "0"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            },
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "vg_name": "ceph_vg0"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        }
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    ],
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    "1": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "devices": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "/dev/loop4"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            ],
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_name": "ceph_lv1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_size": "21470642176",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "name": "ceph_lv1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "tags": {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_name": "ceph",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.crush_device_class": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.encrypted": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.objectstore": "bluestore",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_id": "1",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.vdo": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.with_tpm": "0"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            },
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "vg_name": "ceph_vg1"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        }
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    ],
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    "2": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "devices": [
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "/dev/loop5"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            ],
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_name": "ceph_lv2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_size": "21470642176",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "name": "ceph_lv2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "tags": {
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.cluster_name": "ceph",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.crush_device_class": "",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.encrypted": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.objectstore": "bluestore",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osd_id": "2",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.vdo": "0",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:                "ceph.with_tpm": "0"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            },
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "type": "block",
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:            "vg_name": "ceph_vg2"
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:        }
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]:    ]
Feb  2 12:56:55 np0005605476 quizzical_albattani[265692]: }
Feb  2 12:56:55 np0005605476 systemd[1]: libpod-31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2.scope: Deactivated successfully.
Feb  2 12:56:55 np0005605476 podman[265701]: 2026-02-02 17:56:55.943489296 +0000 UTC m=+0.018293836 container died 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb  2 12:56:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3ed74bb9873634b4345b5b2c9a9767953dc7e49cd8eca0c0c1d65709c1e97d72-merged.mount: Deactivated successfully.
Feb  2 12:56:55 np0005605476 podman[265701]: 2026-02-02 17:56:55.975593811 +0000 UTC m=+0.050398331 container remove 31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_albattani, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:56:55 np0005605476 systemd[1]: libpod-conmon-31a8c0badde2558edd933dca4fdffe064dd00ccd3403fe84c59a0fd292a2aff2.scope: Deactivated successfully.
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.363396894 +0000 UTC m=+0.038708221 container create 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:56:56 np0005605476 systemd[1]: Started libpod-conmon-322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e.scope.
Feb  2 12:56:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.430220646 +0000 UTC m=+0.105531983 container init 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.434829376 +0000 UTC m=+0.110140693 container start 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 12:56:56 np0005605476 pedantic_stonebraker[265795]: 167 167
Feb  2 12:56:56 np0005605476 systemd[1]: libpod-322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e.scope: Deactivated successfully.
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.439483827 +0000 UTC m=+0.114795154 container attach 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.439974201 +0000 UTC m=+0.115285498 container died 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.346352794 +0000 UTC m=+0.021664111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ebbc4880a5ad813f993c45f55c1473fd9a802755714f3e6a4ad07a48f2ba1d3f-merged.mount: Deactivated successfully.
Feb  2 12:56:56 np0005605476 podman[265779]: 2026-02-02 17:56:56.473409973 +0000 UTC m=+0.148721280 container remove 322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 12:56:56 np0005605476 systemd[1]: libpod-conmon-322b864c6a942a8d8bf4a86b5b18fe53158e1e5f1f7405409feabcf0db38342e.scope: Deactivated successfully.
Feb  2 12:56:56 np0005605476 podman[265820]: 2026-02-02 17:56:56.580556651 +0000 UTC m=+0.031308083 container create 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  2 12:56:56 np0005605476 systemd[1]: Started libpod-conmon-0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b.scope.
Feb  2 12:56:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:56:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc479bf8d43291850a4e875958531ae2863deb657acb66db2f0c9a57b9ef75f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc479bf8d43291850a4e875958531ae2863deb657acb66db2f0c9a57b9ef75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc479bf8d43291850a4e875958531ae2863deb657acb66db2f0c9a57b9ef75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc479bf8d43291850a4e875958531ae2863deb657acb66db2f0c9a57b9ef75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:56:56 np0005605476 podman[265820]: 2026-02-02 17:56:56.567534994 +0000 UTC m=+0.018286446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:56:56 np0005605476 podman[265820]: 2026-02-02 17:56:56.677752908 +0000 UTC m=+0.128504350 container init 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:56:56 np0005605476 podman[265820]: 2026-02-02 17:56:56.683361496 +0000 UTC m=+0.134112948 container start 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 12:56:56 np0005605476 podman[265820]: 2026-02-02 17:56:56.688664236 +0000 UTC m=+0.139415688 container attach 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:56:56 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:56Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:77:e4 10.100.0.7
Feb  2 12:56:56 np0005605476 ovn_controller[146041]: 2026-02-02T17:56:56Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:77:e4 10.100.0.7
Feb  2 12:56:57 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:56:57.104 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:56:57 np0005605476 nova_compute[239846]: 2026-02-02 17:56:57.145 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:57 np0005605476 lvm[265914]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:56:57 np0005605476 lvm[265915]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:56:57 np0005605476 lvm[265914]: VG ceph_vg0 finished
Feb  2 12:56:57 np0005605476 lvm[265915]: VG ceph_vg1 finished
Feb  2 12:56:57 np0005605476 lvm[265917]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:56:57 np0005605476 lvm[265917]: VG ceph_vg2 finished
Feb  2 12:56:57 np0005605476 vibrant_kirch[265836]: {}
Feb  2 12:56:57 np0005605476 systemd[1]: libpod-0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b.scope: Deactivated successfully.
Feb  2 12:56:57 np0005605476 systemd[1]: libpod-0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b.scope: Consumed 1.042s CPU time.
Feb  2 12:56:57 np0005605476 podman[265820]: 2026-02-02 17:56:57.443840166 +0000 UTC m=+0.894591588 container died 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 12:56:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9bc479bf8d43291850a4e875958531ae2863deb657acb66db2f0c9a57b9ef75f-merged.mount: Deactivated successfully.
Feb  2 12:56:57 np0005605476 podman[265820]: 2026-02-02 17:56:57.486381475 +0000 UTC m=+0.937132897 container remove 0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 12:56:57 np0005605476 systemd[1]: libpod-conmon-0b78cc74068397b54c3ce91c7f69545494c5da0425191f8f3001cc88b328649b.scope: Deactivated successfully.
Feb  2 12:56:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:56:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:56:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 167 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 972 KiB/s rd, 13 KiB/s wr, 60 op/s
Feb  2 12:56:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:58 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:56:58 np0005605476 podman[265957]: 2026-02-02 17:56:58.613135611 +0000 UTC m=+0.066603947 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb  2 12:56:58 np0005605476 nova_compute[239846]: 2026-02-02 17:56:58.881 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:56:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 52 op/s
Feb  2 12:57:00 np0005605476 nova_compute[239846]: 2026-02-02 17:57:00.201 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 52 op/s
Feb  2 12:57:02 np0005605476 nova_compute[239846]: 2026-02-02 17:57:02.147 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 25 KiB/s wr, 44 op/s
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3338356856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3338356856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:57:05 np0005605476 nova_compute[239846]: 2026-02-02 17:57:05.203 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 29 KiB/s wr, 47 op/s
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Feb  2 12:57:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Feb  2 12:57:07 np0005605476 nova_compute[239846]: 2026-02-02 17:57:07.149 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 47 KiB/s wr, 16 op/s
Feb  2 12:57:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 61 KiB/s wr, 42 op/s
Feb  2 12:57:10 np0005605476 nova_compute[239846]: 2026-02-02 17:57:10.204 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Feb  2 12:57:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Feb  2 12:57:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.465 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.466 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.478 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.551 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.551 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.561 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.562 239853 INFO nova.compute.claims [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:57:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 77 KiB/s wr, 54 op/s
Feb  2 12:57:11 np0005605476 nova_compute[239846]: 2026-02-02 17:57:11.690 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219348463' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.192 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.205 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.210 239853 DEBUG nova.compute.provider_tree [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.275 239853 DEBUG nova.scheduler.client.report [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.306 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.307 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.358 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.358 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.375 239853 INFO nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.390 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.429 239853 INFO nova.virt.block_device [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Booting with volume 0b5a22b3-9c52-4137-9a27-08ee44fd7869 at /dev/vda#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.602 239853 DEBUG os_brick.utils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.604 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.615 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.615 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[f82abe10-3b30-4f79-8b52-bdac2593c7f6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.616 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.622 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.622 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[c4471adc-d049-4853-9d87-ca69c729be44]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.623 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.628 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.629 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[2d87e46a-ef65-49a9-976c-544a3c044e52]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.630 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0306a428-e4f4-4909-ac5c-0f65c6e604dd]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.630 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.645 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.647 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.647 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.647 239853 DEBUG os_brick.initiator.connectors.lightos [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.648 239853 DEBUG os_brick.utils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:57:12 np0005605476 nova_compute[239846]: 2026-02-02 17:57:12.648 239853 DEBUG nova.virt.block_device [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating existing volume attachment record: 5a46e0a9-35c8-46d4-a032-f1a5a2f8845b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:57:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1550776965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.267 239853 DEBUG nova.policy [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7b8ea09739a4455840062f2ad81089a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdfa033071c341d29a9815152416777f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:57:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1734030311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 72 KiB/s wr, 49 op/s
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.806 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.807 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.808 239853 INFO nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Creating image(s)#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.808 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.809 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Ensure instance console log exists: /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.809 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.809 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:13 np0005605476 nova_compute[239846]: 2026-02-02 17:57:13.809 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:14 np0005605476 nova_compute[239846]: 2026-02-02 17:57:14.226 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Successfully created port: aaa7812a-02d9-4554-baab-d6a7c323f0fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:57:15 np0005605476 nova_compute[239846]: 2026-02-02 17:57:15.206 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 34 KiB/s wr, 66 op/s
Feb  2 12:57:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.294 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Successfully updated port: aaa7812a-02d9-4554-baab-d6a7c323f0fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.310 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.311 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquired lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.311 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.408 239853 DEBUG nova.compute.manager [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.408 239853 DEBUG nova.compute.manager [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing instance network info cache due to event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.408 239853 DEBUG oslo_concurrency.lockutils [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:16 np0005605476 nova_compute[239846]: 2026-02-02 17:57:16.452 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:57:17 np0005605476 nova_compute[239846]: 2026-02-02 17:57:17.195 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 33 KiB/s wr, 65 op/s
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.022 239853 DEBUG nova.network.neutron [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating instance_info_cache with network_info: [{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.045 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Releasing lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.045 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Instance network_info: |[{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.045 239853 DEBUG oslo_concurrency.lockutils [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.046 239853 DEBUG nova.network.neutron [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.049 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Start _get_guest_xml network_info=[{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '5a46e0a9-35c8-46d4-a032-f1a5a2f8845b', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0b5a22b3-9c52-4137-9a27-08ee44fd7869', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0b5a22b3-9c52-4137-9a27-08ee44fd7869', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cf91512f-2990-45f5-9c60-7abecad4d703', 'attached_at': '', 'detached_at': '', 'volume_id': '0b5a22b3-9c52-4137-9a27-08ee44fd7869', 'serial': '0b5a22b3-9c52-4137-9a27-08ee44fd7869'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.054 239853 WARNING nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.059 239853 DEBUG nova.virt.libvirt.host [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.060 239853 DEBUG nova.virt.libvirt.host [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.071 239853 DEBUG nova.virt.libvirt.host [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.072 239853 DEBUG nova.virt.libvirt.host [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.072 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.073 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.073 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.073 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.074 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.074 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.074 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.074 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.074 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.075 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.075 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.075 239853 DEBUG nova.virt.hardware [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.102 239853 DEBUG nova.storage.rbd_utils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image cf91512f-2990-45f5-9c60-7abecad4d703_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.106 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514298724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.607 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.653 239853 DEBUG nova.virt.libvirt.vif [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1978164842',display_name='tempest-TestVolumeBootPattern-server-1978164842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1978164842',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-djtwajn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:12Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=cf91512f-2990-45f5-9c60-7abecad4d703,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.654 239853 DEBUG nova.network.os_vif_util [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.655 239853 DEBUG nova.network.os_vif_util [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.655 239853 DEBUG nova.objects.instance [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'pci_devices' on Instance uuid cf91512f-2990-45f5-9c60-7abecad4d703 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.674 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <uuid>cf91512f-2990-45f5-9c60-7abecad4d703</uuid>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <name>instance-00000015</name>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestVolumeBootPattern-server-1978164842</nova:name>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:57:18</nova:creationTime>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:user uuid="d7b8ea09739a4455840062f2ad81089a">tempest-TestVolumeBootPattern-1185251615-project-member</nova:user>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:project uuid="cdfa033071c341d29a9815152416777f">tempest-TestVolumeBootPattern-1185251615</nova:project>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <nova:port uuid="aaa7812a-02d9-4554-baab-d6a7c323f0fc">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="serial">cf91512f-2990-45f5-9c60-7abecad4d703</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="uuid">cf91512f-2990-45f5-9c60-7abecad4d703</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/cf91512f-2990-45f5-9c60-7abecad4d703_disk.config">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-0b5a22b3-9c52-4137-9a27-08ee44fd7869">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <serial>0b5a22b3-9c52-4137-9a27-08ee44fd7869</serial>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:81:8b:83"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <target dev="tapaaa7812a-02"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/console.log" append="off"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:57:18 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:57:18 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:57:18 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:57:18 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.676 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Preparing to wait for external event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.676 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.676 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.676 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.677 239853 DEBUG nova.virt.libvirt.vif [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1978164842',display_name='tempest-TestVolumeBootPattern-server-1978164842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1978164842',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-djtwajn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:12Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=cf91512f-2990-45f5-9c60-7abecad4d703,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.677 239853 DEBUG nova.network.os_vif_util [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.678 239853 DEBUG nova.network.os_vif_util [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.678 239853 DEBUG os_vif [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.678 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.679 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.679 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.682 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.682 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaaa7812a-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.682 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaaa7812a-02, col_values=(('external_ids', {'iface-id': 'aaa7812a-02d9-4554-baab-d6a7c323f0fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:8b:83', 'vm-uuid': 'cf91512f-2990-45f5-9c60-7abecad4d703'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:18 np0005605476 NetworkManager[49022]: <info>  [1770055038.6847] manager: (tapaaa7812a-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.685 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.689 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.689 239853 INFO os_vif [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02')#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.737 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.738 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.738 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] No VIF found with MAC fa:16:3e:81:8b:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.738 239853 INFO nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Using config drive#033[00m
Feb  2 12:57:18 np0005605476 nova_compute[239846]: 2026-02-02 17:57:18.753 239853 DEBUG nova.storage.rbd_utils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image cf91512f-2990-45f5-9c60-7abecad4d703_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 235 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 6.4 MiB/s wr, 70 op/s
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.154 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.154 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.207 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.248 239853 INFO nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Creating config drive at /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.252 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvd7im6sy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.271 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.376 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvd7im6sy" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.393 239853 DEBUG nova.storage.rbd_utils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] rbd image cf91512f-2990-45f5-9c60-7abecad4d703_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.396 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config cf91512f-2990-45f5-9c60-7abecad4d703_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.453 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.454 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.463 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.463 239853 INFO nova.compute.claims [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.577 239853 DEBUG nova.network.neutron [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updated VIF entry in instance network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.578 239853 DEBUG nova.network.neutron [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating instance_info_cache with network_info: [{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.599 239853 DEBUG oslo_concurrency.lockutils [req-51f2f858-c5df-4f79-bdf9-bd0fe7c41ca6 req-dc201318-25dd-4951-9fcd-1448d052c7fa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.667 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.780 239853 DEBUG oslo_concurrency.processutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config cf91512f-2990-45f5-9c60-7abecad4d703_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.781 239853 INFO nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Deleting local config drive /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703/disk.config because it was imported into RBD.#033[00m
Feb  2 12:57:20 np0005605476 NetworkManager[49022]: <info>  [1770055040.8233] manager: (tapaaa7812a-02): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Feb  2 12:57:20 np0005605476 kernel: tapaaa7812a-02: entered promiscuous mode
Feb  2 12:57:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:20Z|00199|binding|INFO|Claiming lport aaa7812a-02d9-4554-baab-d6a7c323f0fc for this chassis.
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.830 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:20Z|00200|binding|INFO|aaa7812a-02d9-4554-baab-d6a7c323f0fc: Claiming fa:16:3e:81:8b:83 10.100.0.13
Feb  2 12:57:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:20Z|00201|binding|INFO|Setting lport aaa7812a-02d9-4554-baab-d6a7c323f0fc ovn-installed in OVS
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.839 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:20 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:20Z|00202|binding|INFO|Setting lport aaa7812a-02d9-4554-baab-d6a7c323f0fc up in Southbound
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.842 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:8b:83 10.100.0.13'], port_security=['fa:16:3e:81:8b:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cf91512f-2990-45f5-9c60-7abecad4d703', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=aaa7812a-02d9-4554-baab-d6a7c323f0fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.843 155391 INFO neutron.agent.ovn.metadata.agent [-] Port aaa7812a-02d9-4554-baab-d6a7c323f0fc in datapath ac1b83e6-8e85-484a-9623-8960b1107077 bound to our chassis#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.846 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:57:20 np0005605476 systemd-machined[208080]: New machine qemu-21-instance-00000015.
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.863 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ab9c8c-18e0-4a3b-bec5-5d2e2f3a6d56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Feb  2 12:57:20 np0005605476 systemd-udevd[266152]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.889 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f436bc6a-fa2d-4f67-ba47-94a9b508292a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 NetworkManager[49022]: <info>  [1770055040.8945] device (tapaaa7812a-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:57:20 np0005605476 NetworkManager[49022]: <info>  [1770055040.8953] device (tapaaa7812a-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.894 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[331d088b-50f7-421a-ae0e-4b66cf4eafa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.920 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[11f0bb7d-7b87-4d9e-8722-ef598ba49008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.945 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[489940c0-38bb-44dc-bf37-4c60c4a5e361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423963, 'reachable_time': 31310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266162, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.956 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[de0ed095-1b97-4581-968d-a470daf88960]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423971, 'tstamp': 423971}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266164, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423973, 'tstamp': 423973}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266164, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.961 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.963 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:20 np0005605476 nova_compute[239846]: 2026-02-02 17:57:20.964 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.964 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.964 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.964 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:20.965 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991728993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.200 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.205 239853 DEBUG nova.compute.provider_tree [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.223 239853 DEBUG nova.scheduler.client.report [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.268 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.268 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.360 239853 DEBUG nova.compute.manager [req-11e3af8a-2c6b-4914-89dd-4bd69853369d req-29af7051-89b9-4e1d-a713-3316846566a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.361 239853 DEBUG oslo_concurrency.lockutils [req-11e3af8a-2c6b-4914-89dd-4bd69853369d req-29af7051-89b9-4e1d-a713-3316846566a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.361 239853 DEBUG oslo_concurrency.lockutils [req-11e3af8a-2c6b-4914-89dd-4bd69853369d req-29af7051-89b9-4e1d-a713-3316846566a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.362 239853 DEBUG oslo_concurrency.lockutils [req-11e3af8a-2c6b-4914-89dd-4bd69853369d req-29af7051-89b9-4e1d-a713-3316846566a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.362 239853 DEBUG nova.compute.manager [req-11e3af8a-2c6b-4914-89dd-4bd69853369d req-29af7051-89b9-4e1d-a713-3316846566a2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Processing event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.400 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.401 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.481 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055041.4807618, cf91512f-2990-45f5-9c60-7abecad4d703 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.483 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] VM Started (Lifecycle Event)#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.485 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.489 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.493 239853 INFO nova.virt.libvirt.driver [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Instance spawned successfully.#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.493 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.532 239853 INFO nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.585 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.589 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.589 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.589 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.590 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.590 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.591 239853 DEBUG nova.virt.libvirt.driver [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.594 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 283 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 11 MiB/s wr, 78 op/s
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.639 239853 DEBUG nova.policy [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c00d8fbb7f314affbdd560b88d4ce236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.652 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.653 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055041.4811144, cf91512f-2990-45f5-9c60-7abecad4d703 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.653 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.770 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.840 239853 INFO nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Took 8.03 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.841 239853 DEBUG nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.889 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.893 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055041.4882548, cf91512f-2990-45f5-9c60-7abecad4d703 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.893 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.921 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.924 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.977 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:21 np0005605476 nova_compute[239846]: 2026-02-02 17:57:21.994 239853 INFO nova.compute.manager [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Took 10.47 seconds to build instance.#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.013 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.015 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.015 239853 INFO nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Creating image(s)#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.043 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.064 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.094 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.099 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.126 239853 DEBUG oslo_concurrency.lockutils [None req-1cebf6c3-4332-4acf-a3d9-2df1ecc3881d d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.127 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.128 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.155 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.156 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.157 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.158 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.203 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.207 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.226 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.300 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.300 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.307 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.307 239853 INFO nova.compute.claims [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.493 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.588 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.608 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Successfully created port: ac3697bb-389e-4638-84a5-0859a2819752 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.641 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] resizing rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.762 239853 DEBUG nova.objects.instance [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'migration_context' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.779 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.780 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Ensure instance console log exists: /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.780 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.780 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:22 np0005605476 nova_compute[239846]: 2026-02-02 17:57:22.781 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:23 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:23 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42917475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.041 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.045 239853 DEBUG nova.compute.provider_tree [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.113 239853 DEBUG nova.scheduler.client.report [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.262 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.262 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.342 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.343 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.370 239853 INFO nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.392 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.435 239853 INFO nova.virt.block_device [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Booting with volume afd56270-31f2-45f6-8185-190fa9bfd997 at /dev/vda#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.441 239853 DEBUG nova.compute.manager [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.442 239853 DEBUG oslo_concurrency.lockutils [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.442 239853 DEBUG oslo_concurrency.lockutils [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.442 239853 DEBUG oslo_concurrency.lockutils [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.443 239853 DEBUG nova.compute.manager [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] No waiting events found dispatching network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.443 239853 WARNING nova.compute.manager [req-b8ed7613-4784-40a5-8e42-cc6554d4e595 req-dc30644b-d204-4a41-bd69-0c6d6d3b373d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received unexpected event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc for instance with vm_state active and task_state None.#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.569 239853 DEBUG nova.policy [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3de5c2f3ec44d4684754f1707ba5236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:57:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 283 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 9.4 MiB/s wr, 69 op/s
Feb  2 12:57:23 np0005605476 podman[266397]: 2026-02-02 17:57:23.607238147 +0000 UTC m=+0.061195425 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.622 239853 DEBUG os_brick.utils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.623 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.631 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.632 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[64c23085-2b7f-4261-b64b-d60d1669ff0d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.633 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.637 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.638 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[58c0309d-b9d9-4821-a0d4-d6f86325d3cc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.639 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.644 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.644 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[99e400b1-633f-49a9-8d51-d334cf24e9a3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.645 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[015d0e84-83b8-4b4b-82f5-37a9685b689e]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.645 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.661 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.663 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.663 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.664 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.664 239853 DEBUG os_brick.utils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] <== get_connector_properties: return (41ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.665 239853 DEBUG nova.virt.block_device [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updating existing volume attachment record: 8d3445cc-03f7-4e00-9f89-34de8a773d8d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:57:23 np0005605476 nova_compute[239846]: 2026-02-02 17:57:23.686 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:24 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606434014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.513 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Successfully updated port: ac3697bb-389e-4638-84a5-0859a2819752 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.545 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.546 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquired lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.546 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.786 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.866 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.869 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.870 239853 INFO nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Creating image(s)#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.871 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.872 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Ensure instance console log exists: /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.873 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.876 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:24 np0005605476 nova_compute[239846]: 2026-02-02 17:57:24.876 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:25 np0005605476 nova_compute[239846]: 2026-02-02 17:57:25.031 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Successfully created port: 0710648a-98cc-4dd5-bb88-9ea33cef69c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:57:25 np0005605476 nova_compute[239846]: 2026-02-02 17:57:25.209 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:25 np0005605476 nova_compute[239846]: 2026-02-02 17:57:25.562 239853 DEBUG nova.compute.manager [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-changed-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:25 np0005605476 nova_compute[239846]: 2026-02-02 17:57:25.563 239853 DEBUG nova.compute.manager [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Refreshing instance network info cache due to event network-changed-ac3697bb-389e-4638-84a5-0859a2819752. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:25 np0005605476 nova_compute[239846]: 2026-02-02 17:57:25.563 239853 DEBUG oslo_concurrency.lockutils [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 315 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 11 MiB/s wr, 143 op/s
Feb  2 12:57:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.173 239853 DEBUG nova.network.neutron [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updating instance_info_cache with network_info: [{"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.206 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Releasing lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.207 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Instance network_info: |[{"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.207 239853 DEBUG oslo_concurrency.lockutils [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.207 239853 DEBUG nova.network.neutron [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Refreshing network info cache for port ac3697bb-389e-4638-84a5-0859a2819752 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.210 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Start _get_guest_xml network_info=[{"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.214 239853 WARNING nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.218 239853 DEBUG nova.virt.libvirt.host [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.218 239853 DEBUG nova.virt.libvirt.host [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.221 239853 DEBUG nova.virt.libvirt.host [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.222 239853 DEBUG nova.virt.libvirt.host [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.222 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.222 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.223 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.223 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.223 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.223 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.223 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.224 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.224 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.224 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.224 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.224 239853 DEBUG nova.virt.hardware [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.227 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.474 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Successfully updated port: 0710648a-98cc-4dd5-bb88-9ea33cef69c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.510 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.510 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquired lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.510 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.737 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:57:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253314817' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.765 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.790 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.794 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.871 239853 DEBUG nova.compute.manager [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.872 239853 DEBUG nova.compute.manager [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing instance network info cache due to event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.872 239853 DEBUG oslo_concurrency.lockutils [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.872 239853 DEBUG oslo_concurrency.lockutils [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:26 np0005605476 nova_compute[239846]: 2026-02-02 17:57:26.872 239853 DEBUG nova.network.neutron [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:27 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:27 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604004999' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.365 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.366 239853 DEBUG nova.virt.libvirt.vif [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2072601040',display_name='tempest-TestEncryptedCinderVolumes-server-2072601040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2072601040',id=22,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCREanuEFPYE+eF4ceZLxhPDOcYQXJ3siOHiQQjA0XJeV9gs5eVNtGx+kCBb/xcJWUCobFqLGNuv1eGmJgYbbAp95zZtxlyFHNp8ldg9W1Yueybe1fM3snSM6n8XagKdBA==',key_name='tempest-keypair-1953777832',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-yn0fkm02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.366 239853 DEBUG nova.network.os_vif_util [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.367 239853 DEBUG nova.network.os_vif_util [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.368 239853 DEBUG nova.objects.instance [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.383 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <uuid>d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf</uuid>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <name>instance-00000016</name>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-2072601040</nova:name>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:57:26</nova:creationTime>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:user uuid="c00d8fbb7f314affbdd560b88d4ce236">tempest-TestEncryptedCinderVolumes-1563506128-project-member</nova:user>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:project uuid="f1ccd20d4c994d098fc29da09fe94797">tempest-TestEncryptedCinderVolumes-1563506128</nova:project>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <nova:port uuid="ac3697bb-389e-4638-84a5-0859a2819752">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="serial">d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="uuid">d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:7b:4f:aa"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <target dev="tapac3697bb-38"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/console.log" append="off"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:57:27 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:57:27 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:57:27 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:57:27 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.384 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Preparing to wait for external event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.384 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.384 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.385 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.385 239853 DEBUG nova.virt.libvirt.vif [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2072601040',display_name='tempest-TestEncryptedCinderVolumes-server-2072601040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2072601040',id=22,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCREanuEFPYE+eF4ceZLxhPDOcYQXJ3siOHiQQjA0XJeV9gs5eVNtGx+kCBb/xcJWUCobFqLGNuv1eGmJgYbbAp95zZtxlyFHNp8ldg9W1Yueybe1fM3snSM6n8XagKdBA==',key_name='tempest-keypair-1953777832',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-yn0fkm02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.386 239853 DEBUG nova.network.os_vif_util [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.386 239853 DEBUG nova.network.os_vif_util [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.387 239853 DEBUG os_vif [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.387 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.388 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.388 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.391 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.391 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac3697bb-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.391 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac3697bb-38, col_values=(('external_ids', {'iface-id': 'ac3697bb-389e-4638-84a5-0859a2819752', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:4f:aa', 'vm-uuid': 'd7fdaddd-b417-4d8e-a3d7-a7132f04c7bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:27 np0005605476 NetworkManager[49022]: <info>  [1770055047.3939] manager: (tapac3697bb-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.395 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.400 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.401 239853 INFO os_vif [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38')#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.459 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.459 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.459 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No VIF found with MAC fa:16:3e:7b:4f:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.460 239853 INFO nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Using config drive#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.476 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.532 239853 DEBUG nova.network.neutron [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updated VIF entry in instance network info cache for port ac3697bb-389e-4638-84a5-0859a2819752. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.533 239853 DEBUG nova.network.neutron [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updating instance_info_cache with network_info: [{"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.550 239853 DEBUG oslo_concurrency.lockutils [req-4fa108c5-11f7-4308-be8a-8e5eef7d46fd req-586df26a-b3f0-4939-a8cb-0c9228ff743b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 330 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 MiB/s wr, 150 op/s
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.733 239853 DEBUG nova.compute.manager [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-changed-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.733 239853 DEBUG nova.compute.manager [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Refreshing instance network info cache due to event network-changed-0710648a-98cc-4dd5-bb88-9ea33cef69c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.733 239853 DEBUG oslo_concurrency.lockutils [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.742 239853 DEBUG nova.network.neutron [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updating instance_info_cache with network_info: [{"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.764 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Releasing lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.764 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Instance network_info: |[{"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.764 239853 DEBUG oslo_concurrency.lockutils [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.764 239853 DEBUG nova.network.neutron [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Refreshing network info cache for port 0710648a-98cc-4dd5-bb88-9ea33cef69c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.767 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Start _get_guest_xml network_info=[{"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '8d3445cc-03f7-4e00-9f89-34de8a773d8d', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'eb6b61fa-cb2c-4e4d-be02-cdb398df790c', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'serial': 'afd56270-31f2-45f6-8185-190fa9bfd997'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.770 239853 WARNING nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.774 239853 DEBUG nova.virt.libvirt.host [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.774 239853 DEBUG nova.virt.libvirt.host [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.777 239853 DEBUG nova.virt.libvirt.host [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.777 239853 DEBUG nova.virt.libvirt.host [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.777 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.777 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.778 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.778 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.778 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.778 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.778 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.779 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.779 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.779 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.779 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.780 239853 DEBUG nova.virt.hardware [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.800 239853 DEBUG nova.storage.rbd_utils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.804 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.825 239853 INFO nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Creating config drive at /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.830 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp8wpo4_t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.951 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp8wpo4_t" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.976 239853 DEBUG nova.storage.rbd_utils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.979 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.995 239853 DEBUG nova.network.neutron [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updated VIF entry in instance network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:27 np0005605476 nova_compute[239846]: 2026-02-02 17:57:27.995 239853 DEBUG nova.network.neutron [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating instance_info_cache with network_info: [{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.017 239853 DEBUG oslo_concurrency.lockutils [req-dbddccfb-c5b4-4700-9f7f-8e1d24529a35 req-4eddd17e-f2b5-4f66-8052-2caa55790233 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.081 239853 DEBUG oslo_concurrency.processutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.082 239853 INFO nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Deleting local config drive /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf/disk.config because it was imported into RBD.#033[00m
Feb  2 12:57:28 np0005605476 kernel: tapac3697bb-38: entered promiscuous mode
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.1204] manager: (tapac3697bb-38): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Feb  2 12:57:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:28Z|00203|binding|INFO|Claiming lport ac3697bb-389e-4638-84a5-0859a2819752 for this chassis.
Feb  2 12:57:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:28Z|00204|binding|INFO|ac3697bb-389e-4638-84a5-0859a2819752: Claiming fa:16:3e:7b:4f:aa 10.100.0.11
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.121 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:28Z|00205|binding|INFO|Setting lport ac3697bb-389e-4638-84a5-0859a2819752 ovn-installed in OVS
Feb  2 12:57:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:28Z|00206|binding|INFO|Setting lport ac3697bb-389e-4638-84a5-0859a2819752 up in Southbound
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.132 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:4f:aa 10.100.0.11'], port_security=['fa:16:3e:7b:4f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd7fdaddd-b417-4d8e-a3d7-a7132f04c7bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3aa6d590-93b7-4292-90fc-74a1afc66cb3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=ac3697bb-389e-4638-84a5-0859a2819752) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.133 155391 INFO neutron.agent.ovn.metadata.agent [-] Port ac3697bb-389e-4638-84a5-0859a2819752 in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec bound to our chassis#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.137 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bad2c851-1c12-4a83-9873-6096fe5f4eec#033[00m
Feb  2 12:57:28 np0005605476 systemd-udevd[266593]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.139 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.145 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6ad92-543e-44b8-9f43-d5f70c44e84c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.146 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbad2c851-11 in ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.1481] device (tapac3697bb-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.148 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbad2c851-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.1494] device (tapac3697bb-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:57:28 np0005605476 systemd-machined[208080]: New machine qemu-22-instance-00000016.
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.148 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[647d4080-6e15-41c3-acf0-859e8955d381]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.151 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[52f217e6-0279-49c8-ba7c-938a750dcd71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.159 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[70e788e7-4c78-44cb-9583-1530213019fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.170 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0df708af-3f17-4380-a995-6ff539c39994]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.203 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[efd7da43-8ed5-40fa-a6dd-8bdbf10f80e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.2088] manager: (tapbad2c851-10): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.209 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e86b3f2-a704-42bb-bfcc-6c94be4cf4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.231 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[1a88e659-a789-4630-a3cd-7a6073d99d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.234 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6b37c204-a449-482c-8832-10eba3b66e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.2507] device (tapbad2c851-10): carrier: link connected
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.252 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[389ed2c4-55a2-47ef-a7eb-e231e09af1e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.266 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[aef53117-b818-4a0c-b3ac-b87f584503aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428958, 'reachable_time': 32121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266627, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.277 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a77710e5-453b-4124-aa4e-dfe421556ed1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:54c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428958, 'tstamp': 428958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266628, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.288 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[986bcd26-d9bd-4173-8639-d05d453d36a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428958, 'reachable_time': 32121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266629, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.304 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9ad073-32e8-45f8-92cd-4c69c41eac71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.337 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf78a330-74c4-461b-9823-c52ff82396df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.339 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.339 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.340 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbad2c851-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.341 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:28 np0005605476 kernel: tapbad2c851-10: entered promiscuous mode
Feb  2 12:57:28 np0005605476 NetworkManager[49022]: <info>  [1770055048.3432] manager: (tapbad2c851-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.344 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbad2c851-10, col_values=(('external_ids', {'iface-id': 'ad9a646b-a8d9-417d-9b26-cd7734bca07f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:28 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:28Z|00207|binding|INFO|Releasing lport ad9a646b-a8d9-417d-9b26-cd7734bca07f from this chassis (sb_readonly=0)
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.345 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.352 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:28 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852253933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.353 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.354 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c377ea1e-9399-43ed-b6db-b9ba8f09e5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.355 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:57:28 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:28.356 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'env', 'PROCESS_TAG=haproxy-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bad2c851-1c12-4a83-9873-6096fe5f4eec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.381 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.491 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055048.4906046, d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.491 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] VM Started (Lifecycle Event)#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.517 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.521 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055048.4907053, d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.522 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.524 239853 DEBUG os_brick.encryptors [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Using volume encryption metadata '{'encryption_key_id': '822ea0f7-8961-4947-906d-091a5d24d69e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'eb6b61fa-cb2c-4e4d-be02-cdb398df790c', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.527 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.544 239853 DEBUG barbicanclient.v1.secrets [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/822ea0f7-8961-4947-906d-091a5d24d69e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.544 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.574 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.575 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.576 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.579 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.600 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.601 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.602 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.631 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.632 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 podman[266703]: 2026-02-02 17:57:28.659349207 +0000 UTC m=+0.040496001 container create 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.660 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.661 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 systemd[1]: Started libpod-conmon-14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a.scope.
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.684 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.685 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:57:28 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7c27ecb09f5118830249c5fadcad2cd0e2fefca708875e0246a5d631a2dd70/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.714 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.714 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 podman[266703]: 2026-02-02 17:57:28.717430273 +0000 UTC m=+0.098577087 container init 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 12:57:28 np0005605476 podman[266703]: 2026-02-02 17:57:28.724045689 +0000 UTC m=+0.105192483 container start 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:57:28 np0005605476 podman[266703]: 2026-02-02 17:57:28.639643102 +0000 UTC m=+0.020789916 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.736 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.737 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [NOTICE]   (266741) : New worker (266747) forked
Feb  2 12:57:28 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [NOTICE]   (266741) : Loading success.
Feb  2 12:57:28 np0005605476 podman[266716]: 2026-02-02 17:57:28.759899029 +0000 UTC m=+0.072980766 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.784 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.784 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.812 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.813 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.840 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:28 np0005605476 nova_compute[239846]: 2026-02-02 17:57:28.840 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.160 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.161 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.181 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.182 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.221 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.222 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.257 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.257 239853 INFO barbicanclient.base [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/822ea0f7-8961-4947-906d-091a5d24d69e#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.313 239853 DEBUG barbicanclient.client [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.314 239853 DEBUG nova.virt.libvirt.host [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <volume>afd56270-31f2-45f6-8185-190fa9bfd997</volume>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:57:29 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:57:29 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.341 239853 DEBUG nova.virt.libvirt.vif [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-681301186',display_name='tempest-TransferEncryptedVolumeTest-server-681301186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-681301186',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-3q91ajip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:23Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=eb6b61fa-cb2c-4e4d-be02-cdb398df790c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.341 239853 DEBUG nova.network.os_vif_util [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.342 239853 DEBUG nova.network.os_vif_util [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.343 239853 DEBUG nova.objects.instance [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb6b61fa-cb2c-4e4d-be02-cdb398df790c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.355 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <uuid>eb6b61fa-cb2c-4e4d-be02-cdb398df790c</uuid>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <name>instance-00000017</name>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-681301186</nova:name>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:57:27</nova:creationTime>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:user uuid="a3de5c2f3ec44d4684754f1707ba5236">tempest-TransferEncryptedVolumeTest-1386167090-project-member</nova:user>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:project uuid="224fb1fcaf0e4ffb9c3e3e7792ff25c6">tempest-TransferEncryptedVolumeTest-1386167090</nova:project>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <nova:port uuid="0710648a-98cc-4dd5-bb88-9ea33cef69c2">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="serial">eb6b61fa-cb2c-4e4d-be02-cdb398df790c</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="uuid">eb6b61fa-cb2c-4e4d-be02-cdb398df790c</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <serial>afd56270-31f2-45f6-8185-190fa9bfd997</serial>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="1a35b267-41a9-4eab-978a-246ce97162cb"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:e0:fb:09"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <target dev="tap0710648a-98"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/console.log" append="off"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:57:29 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:57:29 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:57:29 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:57:29 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.355 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Preparing to wait for external event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.356 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.356 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.356 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.357 239853 DEBUG nova.virt.libvirt.vif [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-681301186',display_name='tempest-TransferEncryptedVolumeTest-server-681301186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-681301186',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-3q91ajip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:57:23Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=eb6b61fa-cb2c-4e4d-be02-cdb398df790c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.357 239853 DEBUG nova.network.os_vif_util [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.358 239853 DEBUG nova.network.os_vif_util [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.358 239853 DEBUG os_vif [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.359 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.359 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.360 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.363 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.363 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0710648a-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.363 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0710648a-98, col_values=(('external_ids', {'iface-id': '0710648a-98cc-4dd5-bb88-9ea33cef69c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:fb:09', 'vm-uuid': 'eb6b61fa-cb2c-4e4d-be02-cdb398df790c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.365 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:29 np0005605476 NetworkManager[49022]: <info>  [1770055049.3659] manager: (tap0710648a-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.367 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.371 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.372 239853 INFO os_vif [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98')#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.430 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.431 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.431 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No VIF found with MAC fa:16:3e:e0:fb:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.431 239853 INFO nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Using config drive#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.452 239853 DEBUG nova.storage.rbd_utils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.459 239853 DEBUG nova.network.neutron [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updated VIF entry in instance network info cache for port 0710648a-98cc-4dd5-bb88-9ea33cef69c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.459 239853 DEBUG nova.network.neutron [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updating instance_info_cache with network_info: [{"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.484 239853 DEBUG oslo_concurrency.lockutils [req-8d857348-21cf-4965-a68e-8ddaad912430 req-1c98a946-458a-47e5-beea-a1e1dcc4472f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 330 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 MiB/s wr, 152 op/s
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.793 239853 INFO nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Creating config drive at /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.797 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpup2zcbg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.826 239853 DEBUG nova.compute.manager [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.826 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.826 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.827 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.827 239853 DEBUG nova.compute.manager [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Processing event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.827 239853 DEBUG nova.compute.manager [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.827 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.827 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.828 239853 DEBUG oslo_concurrency.lockutils [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.828 239853 DEBUG nova.compute.manager [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] No waiting events found dispatching network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.828 239853 WARNING nova.compute.manager [req-571ef8e4-cb78-4bbc-af33-fdb4fef480bb req-84203a8d-cdbd-428b-a09c-4e2219bd02dd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received unexpected event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.829 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.831 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055049.831543, d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.832 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.836 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.839 239853 INFO nova.virt.libvirt.driver [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Instance spawned successfully.#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.840 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.852 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.855 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.865 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.866 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.866 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.867 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.867 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.867 239853 DEBUG nova.virt.libvirt.driver [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.872 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.916 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpup2zcbg_" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.935 239853 DEBUG nova.storage.rbd_utils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.938 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.954 239853 INFO nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Took 7.94 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:57:29 np0005605476 nova_compute[239846]: 2026-02-02 17:57:29.955 239853 DEBUG nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.024 239853 INFO nova.compute.manager [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Took 9.60 seconds to build instance.#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.038 239853 DEBUG oslo_concurrency.lockutils [None req-9fd4f123-2b62-4e02-a2ff-0a37f4e5eff8 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.041 239853 DEBUG oslo_concurrency.processutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config eb6b61fa-cb2c-4e4d-be02-cdb398df790c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.041 239853 INFO nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Deleting local config drive /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c/disk.config because it was imported into RBD.#033[00m
Feb  2 12:57:30 np0005605476 kernel: tap0710648a-98: entered promiscuous mode
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.0845] manager: (tap0710648a-98): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Feb  2 12:57:30 np0005605476 systemd-udevd[266623]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.086 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:30Z|00208|binding|INFO|Claiming lport 0710648a-98cc-4dd5-bb88-9ea33cef69c2 for this chassis.
Feb  2 12:57:30 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:30Z|00209|binding|INFO|0710648a-98cc-4dd5-bb88-9ea33cef69c2: Claiming fa:16:3e:e0:fb:09 10.100.0.7
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.093 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.0958] device (tap0710648a-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.0965] device (tap0710648a-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:57:30 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:30Z|00210|binding|INFO|Setting lport 0710648a-98cc-4dd5-bb88-9ea33cef69c2 ovn-installed in OVS
Feb  2 12:57:30 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:30Z|00211|binding|INFO|Setting lport 0710648a-98cc-4dd5-bb88-9ea33cef69c2 up in Southbound
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.099 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:fb:09 10.100.0.7'], port_security=['fa:16:3e:e0:fb:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eb6b61fa-cb2c-4e4d-be02-cdb398df790c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ed4d424-2957-4e57-bfeb-8d8148412d60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=0710648a-98cc-4dd5-bb88-9ea33cef69c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.100 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 0710648a-98cc-4dd5-bb88-9ea33cef69c2 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 bound to our chassis#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.102 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82a7f311-fed2-4a09-8203-270dceb25c76#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.104 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.112 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9f22c6-3c97-40d9-9e1b-76d9efbc309b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.113 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82a7f311-f1 in ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.115 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82a7f311-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.115 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3135ef95-2933-4a86-94fb-cc89b78dd9f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.116 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[39908d91-b4b7-4708-a38c-5dd4f0aef65a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 systemd-machined[208080]: New machine qemu-23-instance-00000017.
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.129 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[76522983-191a-4d73-b35f-4938b6731291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.138 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f66dc9ce-bd60-4d60-8023-d44c8f9d9670]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.159 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[2363f62b-87f9-4ea6-be80-b4f02cef29d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.167 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1abd9795-dff7-4261-8163-4fd0abfea076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.1686] manager: (tap82a7f311-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.195 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e22a23-a6e7-401b-b03a-a795adaa4afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.198 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[7e85d6af-f943-453e-b6c8-6ac3e94a77fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.210 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.2210] device (tap82a7f311-f0): carrier: link connected
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.223 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d41f5f66-d79a-4ef7-b127-5f3258f9a86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.238 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b34ea90c-3dcd-4843-ac8b-8c9596da3d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429155, 'reachable_time': 27781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266849, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.250 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf69e453-d72f-4cf6-8b68-54e857828d5f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:34d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429155, 'tstamp': 429155}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266850, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.263 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[51e9a6b0-efc3-4b92-a41d-e0e4dff9b86d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429155, 'reachable_time': 27781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266851, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.285 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8818c90d-1ff8-4ffb-96ec-73dd2a155b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.336 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[eec4be9b-7e07-4036-8b2a-8257f0f975a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.340 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.340 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.340 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82a7f311-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:30 np0005605476 NetworkManager[49022]: <info>  [1770055050.3449] manager: (tap82a7f311-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.344 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 kernel: tap82a7f311-f0: entered promiscuous mode
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.347 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.348 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82a7f311-f0, col_values=(('external_ids', {'iface-id': '51e5cd2d-8b15-4de8-985f-c87fe41124e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.349 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:30Z|00212|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.350 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.352 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:57:30 np0005605476 nova_compute[239846]: 2026-02-02 17:57:30.355 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.355 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[40823059-7d70-437b-ae77-65dc3a3ff423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.358 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:57:30 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:30.358 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'env', 'PROCESS_TAG=haproxy-82a7f311-fed2-4a09-8203-270dceb25c76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82a7f311-fed2-4a09-8203-270dceb25c76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:57:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:30 np0005605476 podman[266917]: 2026-02-02 17:57:30.739372123 +0000 UTC m=+0.062883692 container create b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:57:30 np0005605476 systemd[1]: Started libpod-conmon-b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484.scope.
Feb  2 12:57:30 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:57:30 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae84ee2f3d0082862fbac0dc7ea6da53475b74a2570fadd4b3d13e23666bad2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:30 np0005605476 podman[266917]: 2026-02-02 17:57:30.713120624 +0000 UTC m=+0.036632213 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:57:30 np0005605476 podman[266917]: 2026-02-02 17:57:30.810307281 +0000 UTC m=+0.133818870 container init b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:57:30 np0005605476 podman[266917]: 2026-02-02 17:57:30.814583302 +0000 UTC m=+0.138094871 container start b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 12:57:30 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [NOTICE]   (266936) : New worker (266938) forked
Feb  2 12:57:30 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [NOTICE]   (266936) : Loading success.
Feb  2 12:57:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 330 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.8 MiB/s wr, 134 op/s
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.895 239853 DEBUG nova.compute.manager [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.896 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.896 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.896 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.896 239853 DEBUG nova.compute.manager [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Processing event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.897 239853 DEBUG nova.compute.manager [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.897 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.897 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.897 239853 DEBUG oslo_concurrency.lockutils [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.898 239853 DEBUG nova.compute.manager [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] No waiting events found dispatching network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:31 np0005605476 nova_compute[239846]: 2026-02-02 17:57:31.898 239853 WARNING nova.compute.manager [req-e44761b4-53f5-480a-93af-4e4138c00128 req-1fda2f04-5df5-4b44-9266-ca567a7f165d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received unexpected event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:57:32 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:32Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:8b:83 10.100.0.13
Feb  2 12:57:32 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:32Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:8b:83 10.100.0.13
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.000 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055053.0000873, eb6b61fa-cb2c-4e4d-be02-cdb398df790c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.001 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] VM Started (Lifecycle Event)#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.003 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.007 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.010 239853 INFO nova.virt.libvirt.driver [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Instance spawned successfully.#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.010 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.027 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.032 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.036 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.036 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.037 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.037 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.038 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.038 239853 DEBUG nova.virt.libvirt.driver [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.073 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.074 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055053.0010414, eb6b61fa-cb2c-4e4d-be02-cdb398df790c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.074 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.104 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.106 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055053.006194, eb6b61fa-cb2c-4e4d-be02-cdb398df790c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.107 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.117 239853 INFO nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Took 8.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.118 239853 DEBUG nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.130 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.132 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.171 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.189 239853 INFO nova.compute.manager [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Took 10.91 seconds to build instance.#033[00m
Feb  2 12:57:33 np0005605476 nova_compute[239846]: 2026-02-02 17:57:33.208 239853 DEBUG oslo_concurrency.lockutils [None req-f71dd4af-ee7f-438e-8424-1e265e09eb8d a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 330 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.366 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.556 239853 DEBUG nova.compute.manager [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-changed-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.557 239853 DEBUG nova.compute.manager [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Refreshing instance network info cache due to event network-changed-ac3697bb-389e-4638-84a5-0859a2819752. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.557 239853 DEBUG oslo_concurrency.lockutils [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.557 239853 DEBUG oslo_concurrency.lockutils [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:34 np0005605476 nova_compute[239846]: 2026-02-02 17:57:34.558 239853 DEBUG nova.network.neutron [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Refreshing network info cache for port ac3697bb-389e-4638-84a5-0859a2819752 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:35 np0005605476 nova_compute[239846]: 2026-02-02 17:57:35.213 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 343 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Feb  2 12:57:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:36 np0005605476 nova_compute[239846]: 2026-02-02 17:57:36.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:57:36
Feb  2 12:57:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:57:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:57:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb  2 12:57:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 344 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 721 KiB/s wr, 228 op/s
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:57:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:57:38 np0005605476 nova_compute[239846]: 2026-02-02 17:57:38.013 239853 DEBUG nova.network.neutron [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updated VIF entry in instance network info cache for port ac3697bb-389e-4638-84a5-0859a2819752. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:38 np0005605476 nova_compute[239846]: 2026-02-02 17:57:38.014 239853 DEBUG nova.network.neutron [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updating instance_info_cache with network_info: [{"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:38 np0005605476 nova_compute[239846]: 2026-02-02 17:57:38.087 239853 DEBUG oslo_concurrency.lockutils [req-418c3594-09fb-4038-966a-305c5ed62f57 req-2a446816-52d0-4244-81c6-1205813c44d3 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.369 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:39Z|00213|memory|INFO|peak resident set size grew 51% in last 1642.1 seconds, from 16128 kB to 24304 kB
Feb  2 12:57:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:39Z|00214|memory|INFO|idl-cells-OVN_Southbound:10970 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:407 lflow-cache-entries-cache-matches:294 lflow-cache-size-KB:1662 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:692 ofctrl_installed_flow_usage-KB:505 ofctrl_sb_flow_ref_usage-KB:261
Feb  2 12:57:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 348 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 606 KiB/s wr, 200 op/s
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.961 239853 DEBUG nova.compute.manager [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-changed-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.961 239853 DEBUG nova.compute.manager [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Refreshing instance network info cache due to event network-changed-0710648a-98cc-4dd5-bb88-9ea33cef69c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.962 239853 DEBUG oslo_concurrency.lockutils [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.962 239853 DEBUG oslo_concurrency.lockutils [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:39 np0005605476 nova_compute[239846]: 2026-02-02 17:57:39.962 239853 DEBUG nova.network.neutron [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Refreshing network info cache for port 0710648a-98cc-4dd5-bb88-9ea33cef69c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:40 np0005605476 nova_compute[239846]: 2026-02-02 17:57:40.214 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:40 np0005605476 nova_compute[239846]: 2026-02-02 17:57:40.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.277 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.278 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.278 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.278 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.278 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.389 239853 DEBUG nova.network.neutron [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updated VIF entry in instance network info cache for port 0710648a-98cc-4dd5-bb88-9ea33cef69c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.391 239853 DEBUG nova.network.neutron [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updating instance_info_cache with network_info: [{"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.417 239853 DEBUG oslo_concurrency.lockutils [req-cc98e50c-c06e-453c-83c7-1af4e5f58b70 req-56b20d3d-9eda-49e8-9358-133be60c9fdd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-eb6b61fa-cb2c-4e4d-be02-cdb398df790c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 348 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 625 KiB/s wr, 206 op/s
Feb  2 12:57:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633915796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.867 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.934 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.934 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.938 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.938 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.941 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.941 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.944 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:41 np0005605476 nova_compute[239846]: 2026-02-02 17:57:41.944 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.096 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.098 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3678MB free_disk=59.96572040487081GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.098 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.098 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance cf91512f-2990-45f5-9c60-7abecad4d703 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance eb6b61fa-cb2c-4e4d-be02-cdb398df790c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.266 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.266 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.376 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:42Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:4f:aa 10.100.0.11
Feb  2 12:57:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:42Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:4f:aa 10.100.0.11
Feb  2 12:57:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082226786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.907 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.913 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.930 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.948 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:57:42 np0005605476 nova_compute[239846]: 2026-02-02 17:57:42.949 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 348 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 610 KiB/s wr, 190 op/s
Feb  2 12:57:43 np0005605476 nova_compute[239846]: 2026-02-02 17:57:43.945 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:43 np0005605476 nova_compute[239846]: 2026-02-02 17:57:43.946 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:43 np0005605476 nova_compute[239846]: 2026-02-02 17:57:43.946 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:57:43 np0005605476 nova_compute[239846]: 2026-02-02 17:57:43.947 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:57:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:44Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:fb:09 10.100.0.7
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.373 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:44Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:fb:09 10.100.0.7
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.652 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.653 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.653 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.653 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.658 239853 DEBUG nova.compute.manager [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.658 239853 DEBUG nova.compute.manager [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing instance network info cache due to event network-changed-aaa7812a-02d9-4554-baab-d6a7c323f0fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.658 239853 DEBUG oslo_concurrency.lockutils [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.659 239853 DEBUG oslo_concurrency.lockutils [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.659 239853 DEBUG nova.network.neutron [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Refreshing network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.761 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.762 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.763 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.763 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.763 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.765 239853 INFO nova.compute.manager [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Terminating instance#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.766 239853 DEBUG nova.compute.manager [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:57:44 np0005605476 kernel: tapaaa7812a-02 (unregistering): left promiscuous mode
Feb  2 12:57:44 np0005605476 NetworkManager[49022]: <info>  [1770055064.8115] device (tapaaa7812a-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.822 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:44Z|00215|binding|INFO|Releasing lport aaa7812a-02d9-4554-baab-d6a7c323f0fc from this chassis (sb_readonly=0)
Feb  2 12:57:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:44Z|00216|binding|INFO|Setting lport aaa7812a-02d9-4554-baab-d6a7c323f0fc down in Southbound
Feb  2 12:57:44 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:44Z|00217|binding|INFO|Removing iface tapaaa7812a-02 ovn-installed in OVS
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.825 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.833 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.835 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:8b:83 10.100.0.13'], port_security=['fa:16:3e:81:8b:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cf91512f-2990-45f5-9c60-7abecad4d703', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=aaa7812a-02d9-4554-baab-d6a7c323f0fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.837 155391 INFO neutron.agent.ovn.metadata.agent [-] Port aaa7812a-02d9-4554-baab-d6a7c323f0fc in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.839 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac1b83e6-8e85-484a-9623-8960b1107077#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.850 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a40e1103-74cf-4ac0-9cf6-85aa8eef041c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb  2 12:57:44 np0005605476 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 11.994s CPU time.
Feb  2 12:57:44 np0005605476 systemd-machined[208080]: Machine qemu-21-instance-00000015 terminated.
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.870 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc2e8c6-d766-4ad8-a0b5-6f559a0458fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.873 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[7dca28a2-49f3-4454-be0e-ee8e736ba8b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.895 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[51c1f0db-b50e-4076-b1aa-66a263ce0d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.910 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0621dc-38b4-4f61-a6a8-f615f099966a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac1b83e6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:c7:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423963, 'reachable_time': 31310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267010, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.924 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[965c4c67-49e2-4480-9e46-f605888aacb1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423971, 'tstamp': 423971}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267011, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac1b83e6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423973, 'tstamp': 423973}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267011, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.926 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.927 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.930 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.930 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac1b83e6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.931 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.931 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac1b83e6-80, col_values=(('external_ids', {'iface-id': '25290ff2-fb45-4116-8eb3-96ed5f17945e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:44 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:44.931 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.996 239853 INFO nova.virt.libvirt.driver [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Instance destroyed successfully.#033[00m
Feb  2 12:57:44 np0005605476 nova_compute[239846]: 2026-02-02 17:57:44.997 239853 DEBUG nova.objects.instance [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid cf91512f-2990-45f5-9c60-7abecad4d703 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.034 239853 DEBUG nova.virt.libvirt.vif [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1978164842',display_name='tempest-TestVolumeBootPattern-server-1978164842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1978164842',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:57:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-djtwajn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:57:21Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=cf91512f-2990-45f5-9c60-7abecad4d703,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.035 239853 DEBUG nova.network.os_vif_util [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.035 239853 DEBUG nova.network.os_vif_util [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.035 239853 DEBUG os_vif [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.036 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.037 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaaa7812a-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.039 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.040 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.040 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.043 239853 INFO os_vif [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:8b:83,bridge_name='br-int',has_traffic_filtering=True,id=aaa7812a-02d9-4554-baab-d6a7c323f0fc,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa7812a-02')#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.217 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.244 239853 DEBUG nova.compute.manager [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-unplugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.245 239853 DEBUG oslo_concurrency.lockutils [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.245 239853 DEBUG oslo_concurrency.lockutils [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.245 239853 DEBUG oslo_concurrency.lockutils [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.245 239853 DEBUG nova.compute.manager [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] No waiting events found dispatching network-vif-unplugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.246 239853 DEBUG nova.compute.manager [req-b4209af0-4d86-46da-bbc2-8c0eeff615e4 req-e4ebbd24-9c20-4d4d-a911-c8ea92229939 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-unplugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.397 239853 INFO nova.virt.libvirt.driver [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Deleting instance files /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703_del#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.399 239853 INFO nova.virt.libvirt.driver [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Deletion of /var/lib/nova/instances/cf91512f-2990-45f5-9c60-7abecad4d703_del complete#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.450 239853 INFO nova.compute.manager [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.450 239853 DEBUG oslo.service.loopingcall [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.451 239853 DEBUG nova.compute.manager [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:57:45 np0005605476 nova_compute[239846]: 2026-02-02 17:57:45.451 239853 DEBUG nova.network.neutron [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:57:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 374 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 247 op/s
Feb  2 12:57:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:46.649 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:46.650 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:46.651 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.749 239853 DEBUG nova.network.neutron [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updated VIF entry in instance network info cache for port aaa7812a-02d9-4554-baab-d6a7c323f0fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.750 239853 DEBUG nova.network.neutron [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating instance_info_cache with network_info: [{"id": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "address": "fa:16:3e:81:8b:83", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa7812a-02", "ovs_interfaceid": "aaa7812a-02d9-4554-baab-d6a7c323f0fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.775 239853 DEBUG oslo_concurrency.lockutils [req-39818961-9c6b-4cd2-acdd-f96ec7e3a9f8 req-5e1b01d5-01c6-4eea-a5f9-2331d0cf1ef5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-cf91512f-2990-45f5-9c60-7abecad4d703" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.778 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.796 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.796 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.797 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.797 239853 DEBUG nova.network.neutron [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.798 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.816 239853 INFO nova.compute.manager [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Took 1.37 seconds to deallocate network for instance.#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.867 239853 DEBUG nova.compute.manager [req-a91dc730-32f9-42f9-ba24-184cc51c38c0 req-6953f7f8-13f1-44cf-ab66-afa1477ccd8c e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-deleted-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:46 np0005605476 nova_compute[239846]: 2026-02-02 17:57:46.971 239853 INFO nova.compute.manager [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Took 0.15 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.036 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.036 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.090 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.141 239853 DEBUG oslo_concurrency.processutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.338 239853 DEBUG nova.compute.manager [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.339 239853 DEBUG oslo_concurrency.lockutils [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.340 239853 DEBUG oslo_concurrency.lockutils [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.340 239853 DEBUG oslo_concurrency.lockutils [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.340 239853 DEBUG nova.compute.manager [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] No waiting events found dispatching network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.340 239853 WARNING nova.compute.manager [req-de82b36f-8c12-43ce-8f0d-69e2d4b3e5d5 req-9acd5650-22d6-48d2-905e-7b1c039c8313 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Received unexpected event network-vif-plugged-aaa7812a-02d9-4554-baab-d6a7c323f0fc for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007293975127351123 of space, bias 1.0, pg target 0.2188192538205337 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003065868661980723 of space, bias 1.0, pg target 0.9197605985942169 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.1278046999637786e-06 of space, bias 1.0, pg target 0.0006383414099891336 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665096716419082 of space, bias 1.0, pg target 0.19995290149257247 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.760614747747108e-07 of space, bias 4.0, pg target 0.0011712737697296529 quantized to 16 (current 16)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:57:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 418 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.3 MiB/s wr, 179 op/s
Feb  2 12:57:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534691980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.705 239853 DEBUG oslo_concurrency.processutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.710 239853 DEBUG nova.compute.provider_tree [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.727 239853 DEBUG nova.scheduler.client.report [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.749 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.776 239853 INFO nova.scheduler.client.report [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance cf91512f-2990-45f5-9c60-7abecad4d703#033[00m
Feb  2 12:57:47 np0005605476 nova_compute[239846]: 2026-02-02 17:57:47.848 239853 DEBUG oslo_concurrency.lockutils [None req-25ad00b3-d265-4093-bac7-6726585d652e d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "cf91512f-2990-45f5-9c60-7abecad4d703" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:48 np0005605476 nova_compute[239846]: 2026-02-02 17:57:48.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:48 np0005605476 nova_compute[239846]: 2026-02-02 17:57:48.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 12:57:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:57:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230250749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:57:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:57:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230250749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:57:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 433 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 175 op/s
Feb  2 12:57:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Feb  2 12:57:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Feb  2 12:57:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.039 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.218 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.256 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.256 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:57:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.775 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.775 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.856 239853 DEBUG nova.objects.instance [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'flavor' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:50 np0005605476 nova_compute[239846]: 2026-02-02 17:57:50.941 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.306 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.307 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.307 239853 INFO nova.compute.manager [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Attaching volume 83bd8689-6041-4f31-b319-c5c060772922 to /dev/vdb#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.429 239853 DEBUG nova.compute.manager [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.430 239853 DEBUG nova.compute.manager [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing instance network info cache due to event network-changed-41e29f7d-c6b6-4096-beb4-01675925dfbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.430 239853 DEBUG oslo_concurrency.lockutils [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.430 239853 DEBUG oslo_concurrency.lockutils [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.430 239853 DEBUG nova.network.neutron [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Refreshing network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.433 239853 DEBUG os_brick.utils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.435 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.445 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.445 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[e234162d-3595-4588-8332-45f322042932]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.446 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.453 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.453 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1514ee-57f3-441c-9fb9-0b52c3b3f0f3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.455 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.463 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.463 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[d80d1282-565e-4248-ab57-f22debbb2875]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.464 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[43c2278e-2a4f-457e-af4b-b2e3e217f999]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.464 239853 DEBUG oslo_concurrency.processutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.479 239853 DEBUG oslo_concurrency.processutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.481 239853 DEBUG os_brick.initiator.connectors.lightos [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.481 239853 DEBUG os_brick.initiator.connectors.lightos [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.482 239853 DEBUG os_brick.initiator.connectors.lightos [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.482 239853 DEBUG os_brick.utils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.482 239853 DEBUG nova.virt.block_device [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updating existing volume attachment record: 90afcc41-a1c7-4e7e-a992-27a2b8758620 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.590 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.591 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.591 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.591 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.591 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.593 239853 INFO nova.compute.manager [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Terminating instance#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.594 239853 DEBUG nova.compute.manager [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:57:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 431 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 9.5 MiB/s wr, 203 op/s
Feb  2 12:57:51 np0005605476 kernel: tap41e29f7d-c6 (unregistering): left promiscuous mode
Feb  2 12:57:51 np0005605476 NetworkManager[49022]: <info>  [1770055071.6543] device (tap41e29f7d-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:57:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:51Z|00218|binding|INFO|Releasing lport 41e29f7d-c6b6-4096-beb4-01675925dfbb from this chassis (sb_readonly=0)
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.660 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:51Z|00219|binding|INFO|Setting lport 41e29f7d-c6b6-4096-beb4-01675925dfbb down in Southbound
Feb  2 12:57:51 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:51Z|00220|binding|INFO|Removing iface tap41e29f7d-c6 ovn-installed in OVS
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.663 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.667 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:51.668 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:77:e4 10.100.0.7'], port_security=['fa:16:3e:f7:77:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '918af4a9-09ac-4a18-b2bd-f7ea2c0e7452', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac1b83e6-8e85-484a-9623-8960b1107077', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdfa033071c341d29a9815152416777f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b3ea3c6-b161-4d2a-b0ff-4799f10ffc02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31793217-5789-48c9-b197-953bbb5ce9ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=41e29f7d-c6b6-4096-beb4-01675925dfbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:51 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:51.669 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 41e29f7d-c6b6-4096-beb4-01675925dfbb in datapath ac1b83e6-8e85-484a-9623-8960b1107077 unbound from our chassis#033[00m
Feb  2 12:57:51 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:51.671 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac1b83e6-8e85-484a-9623-8960b1107077, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:57:51 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:51.672 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[458a4cec-0ad9-4196-8260-a2917bdb2777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:51 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:51.673 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 namespace which is not needed anymore#033[00m
Feb  2 12:57:51 np0005605476 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb  2 12:57:51 np0005605476 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 13.504s CPU time.
Feb  2 12:57:51 np0005605476 systemd-machined[208080]: Machine qemu-20-instance-00000014 terminated.
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [NOTICE]   (265228) : haproxy version is 2.8.14-c23fe91
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [NOTICE]   (265228) : path to executable is /usr/sbin/haproxy
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [WARNING]  (265228) : Exiting Master process...
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [WARNING]  (265228) : Exiting Master process...
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [ALERT]    (265228) : Current worker (265230) exited with code 143 (Terminated)
Feb  2 12:57:51 np0005605476 neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077[265224]: [WARNING]  (265228) : All workers exited. Exiting... (0)
Feb  2 12:57:51 np0005605476 systemd[1]: libpod-d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66.scope: Deactivated successfully.
Feb  2 12:57:51 np0005605476 podman[267096]: 2026-02-02 17:57:51.807115119 +0000 UTC m=+0.047913440 container died d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.829 239853 INFO nova.virt.libvirt.driver [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Instance destroyed successfully.#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.830 239853 DEBUG nova.objects.instance [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lazy-loading 'resources' on Instance uuid 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66-userdata-shm.mount: Deactivated successfully.
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.843 239853 DEBUG nova.virt.libvirt.vif [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2926141',display_name='tempest-TestVolumeBootPattern-server-2926141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2926141',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO4ulf/RwecVzW3ozXNM5la5CsG9wsK3YFcQ5CoZoldFz5UABUexFBTfDuQoCuLTpWgwuBAQ+iUOHcJ28XAmlAq9MhX8vbUIjdWGNKpxQLSxAUQDHqD6Nda3hRaVYYTSVw==',key_name='tempest-TestVolumeBootPattern-1750914228',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:56:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdfa033071c341d29a9815152416777f',ramdisk_id='',reservation_id='r-85ag0g5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1185251615',owner_user_name='tempest-TestVolumeBootPattern-1185251615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:56:40Z,user_data=None,user_id='d7b8ea09739a4455840062f2ad81089a',uuid=918af4a9-09ac-4a18-b2bd-f7ea2c0e7452,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.843 239853 DEBUG nova.network.os_vif_util [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converting VIF {"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.844 239853 DEBUG nova.network.os_vif_util [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.844 239853 DEBUG os_vif [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:57:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-66738f997b7f688c12dcc960f89abfddde894243b4e42345235bf9de6fb520b2-merged.mount: Deactivated successfully.
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.846 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.846 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41e29f7d-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.888 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.889 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:51 np0005605476 nova_compute[239846]: 2026-02-02 17:57:51.891 239853 INFO os_vif [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:77:e4,bridge_name='br-int',has_traffic_filtering=True,id=41e29f7d-c6b6-4096-beb4-01675925dfbb,network=Network(ac1b83e6-8e85-484a-9623-8960b1107077),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41e29f7d-c6')#033[00m
Feb  2 12:57:52 np0005605476 podman[267096]: 2026-02-02 17:57:52.09081722 +0000 UTC m=+0.331615521 container cleanup d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:57:52 np0005605476 systemd[1]: libpod-conmon-d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66.scope: Deactivated successfully.
Feb  2 12:57:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:57:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127388083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:57:52 np0005605476 podman[267155]: 2026-02-02 17:57:52.214858494 +0000 UTC m=+0.106623734 container remove d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.218 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[15fef0b2-2872-4ff2-b985-1b7bd8ceb465]: (4, ('Mon Feb  2 05:57:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66)\nd52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66\nMon Feb  2 05:57:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 (d52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66)\nd52fc519eedf647a44f274e17e9c5163510cfa9861c875f9184e1fc892931a66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.220 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d21e6979-9b05-47c1-b646-c08cbd2d4f5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.220 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac1b83e6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.222 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:52 np0005605476 kernel: tapac1b83e6-80: left promiscuous mode
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.224 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.226 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cf46adc6-6105-4354-bea5-7c03a1bd8582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.234 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.239 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8c265c49-0763-42c9-853c-799e85361702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.240 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3d4bbf-19df-4475-bcf4-c6e10b7d039a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.250 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cb24f1-3154-4c16-bcd3-7743f15c4b28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423958, 'reachable_time': 28804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267171, 'error': None, 'target': 'ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.252 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac1b83e6-8e85-484a-9623-8960b1107077 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:57:52 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:52.252 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe5b6c-7534-44a3-9db2-04ac31d74914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:52 np0005605476 systemd[1]: run-netns-ovnmeta\x2dac1b83e6\x2d8e85\x2d484a\x2d9623\x2d8960b1107077.mount: Deactivated successfully.
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.276 239853 INFO nova.virt.libvirt.driver [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Deleting instance files /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_del#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.277 239853 INFO nova.virt.libvirt.driver [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Deletion of /var/lib/nova/instances/918af4a9-09ac-4a18-b2bd-f7ea2c0e7452_del complete#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.434 239853 INFO nova.compute.manager [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.434 239853 DEBUG oslo.service.loopingcall [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.434 239853 DEBUG nova.compute.manager [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.435 239853 DEBUG nova.network.neutron [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.547 239853 DEBUG os_brick.encryptors [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a1c10faa-a29a-4160-90ec-f7e7216d397f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-83bd8689-6041-4f31-b319-c5c060772922', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '83bd8689-6041-4f31-b319-c5c060772922', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7fdaddd-b417-4d8e-a3d7-a7132f04c7bf', 'attached_at': '', 'detached_at': '', 'volume_id': '83bd8689-6041-4f31-b319-c5c060772922', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.551 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.565 239853 DEBUG barbicanclient.v1.secrets [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.565 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.591 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.592 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.611 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.612 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.631 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.632 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.659 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.660 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.687 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.687 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.712 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.713 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.737 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.737 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.757 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.758 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.773 239853 DEBUG nova.network.neutron [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updated VIF entry in instance network info cache for port 41e29f7d-c6b6-4096-beb4-01675925dfbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.774 239853 DEBUG nova.network.neutron [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [{"id": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "address": "fa:16:3e:f7:77:e4", "network": {"id": "ac1b83e6-8e85-484a-9623-8960b1107077", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1318481822-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdfa033071c341d29a9815152416777f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41e29f7d-c6", "ovs_interfaceid": "41e29f7d-c6b6-4096-beb4-01675925dfbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.784 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.784 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.794 239853 DEBUG oslo_concurrency.lockutils [req-dca2576c-fe2a-4523-962f-8779195a5140 req-3828f239-2bd3-4b3c-b5bb-4df325fd23bf e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.812 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.812 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.832 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.832 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.877 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.877 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.900 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.901 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.924 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.925 239853 INFO barbicanclient.base [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/a1c10faa-a29a-4160-90ec-f7e7216d397f#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.951 239853 DEBUG barbicanclient.client [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.951 239853 DEBUG nova.virt.libvirt.host [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:    <volume>83bd8689-6041-4f31-b319-c5c060772922</volume>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:57:52 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:57:52 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.962 239853 DEBUG nova.objects.instance [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'flavor' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.986 239853 DEBUG nova.virt.libvirt.driver [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Attempting to attach volume 83bd8689-6041-4f31-b319-c5c060772922 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 12:57:52 np0005605476 nova_compute[239846]: 2026-02-02 17:57:52.988 239853 DEBUG nova.virt.libvirt.guest [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-83bd8689-6041-4f31-b319-c5c060772922">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  </auth>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <serial>83bd8689-6041-4f31-b319-c5c060772922</serial>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:57:52 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="bea147aa-8ee7-4106-bb82-caab816b88c2"/>
Feb  2 12:57:52 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:57:52 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:57:52 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.262 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 12:57:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 431 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 9.5 MiB/s wr, 203 op/s
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.658 239853 DEBUG nova.network.neutron [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:53.664 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.664 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:53 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:53.665 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.686 239853 INFO nova.compute.manager [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Took 1.25 seconds to deallocate network for instance.#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.743 239853 DEBUG nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-unplugged-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] No waiting events found dispatching network-vif-unplugged-41e29f7d-c6b6-4096-beb4-01675925dfbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-unplugged-41e29f7d-c6b6-4096-beb4-01675925dfbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.744 239853 DEBUG nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.745 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.745 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.745 239853 DEBUG oslo_concurrency.lockutils [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.745 239853 DEBUG nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] No waiting events found dispatching network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.745 239853 WARNING nova.compute.manager [req-3aaa8a98-9145-4c74-8dc8-b88aebefcde7 req-ecb085bf-aaf5-49d8-8acc-1f65f7a981fe e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received unexpected event network-vif-plugged-41e29f7d-c6b6-4096-beb4-01675925dfbb for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.747 239853 DEBUG nova.compute.manager [req-1101dc40-2145-4249-8404-ef2f7f44765a req-b69fa21e-7f5f-4c41-b6f3-c0f582616ed8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Received event network-vif-deleted-41e29f7d-c6b6-4096-beb4-01675925dfbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:53 np0005605476 nova_compute[239846]: 2026-02-02 17:57:53.855 239853 INFO nova.compute.manager [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.046 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.047 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.123 239853 DEBUG oslo_concurrency.processutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:54 np0005605476 podman[267213]: 2026-02-02 17:57:54.609141513 +0000 UTC m=+0.062723318 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 12:57:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837181102' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.635 239853 DEBUG oslo_concurrency.processutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.642 239853 DEBUG nova.compute.provider_tree [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.656 239853 DEBUG nova.scheduler.client.report [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.673 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.699 239853 INFO nova.scheduler.client.report [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Deleted allocations for instance 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452#033[00m
Feb  2 12:57:54 np0005605476 nova_compute[239846]: 2026-02-02 17:57:54.754 239853 DEBUG oslo_concurrency.lockutils [None req-c92e500f-0300-416b-8b95-c657cec75ba0 d7b8ea09739a4455840062f2ad81089a cdfa033071c341d29a9815152416777f - - default default] Lock "918af4a9-09ac-4a18-b2bd-f7ea2c0e7452" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.220 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.462 239853 DEBUG nova.virt.libvirt.driver [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.462 239853 DEBUG nova.virt.libvirt.driver [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.463 239853 DEBUG nova.virt.libvirt.driver [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.463 239853 DEBUG nova.virt.libvirt.driver [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No VIF found with MAC fa:16:3e:7b:4f:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.595 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.596 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.596 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.596 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.596 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.597 239853 INFO nova.compute.manager [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Terminating instance#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.598 239853 DEBUG nova.compute.manager [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:57:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 431 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 5.8 MiB/s wr, 170 op/s
Feb  2 12:57:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:57:55 np0005605476 kernel: tap0710648a-98 (unregistering): left promiscuous mode
Feb  2 12:57:55 np0005605476 NetworkManager[49022]: <info>  [1770055075.6575] device (tap0710648a-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:57:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:55Z|00221|binding|INFO|Releasing lport 0710648a-98cc-4dd5-bb88-9ea33cef69c2 from this chassis (sb_readonly=0)
Feb  2 12:57:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:55Z|00222|binding|INFO|Setting lport 0710648a-98cc-4dd5-bb88-9ea33cef69c2 down in Southbound
Feb  2 12:57:55 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:55Z|00223|binding|INFO|Removing iface tap0710648a-98 ovn-installed in OVS
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.701 239853 DEBUG oslo_concurrency.lockutils [None req-415fbcbd-c226-4248-b634-e0b81c97b298 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.702 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.708 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.709 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:fb:09 10.100.0.7'], port_security=['fa:16:3e:e0:fb:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eb6b61fa-cb2c-4e4d-be02-cdb398df790c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed4d424-2957-4e57-bfeb-8d8148412d60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=0710648a-98cc-4dd5-bb88-9ea33cef69c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.710 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 0710648a-98cc-4dd5-bb88-9ea33cef69c2 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 unbound from our chassis#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.712 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82a7f311-fed2-4a09-8203-270dceb25c76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.713 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebdfe1e-dbef-4b00-bd9f-ce630f1ee974]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.713 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace which is not needed anymore#033[00m
Feb  2 12:57:55 np0005605476 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb  2 12:57:55 np0005605476 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 14.743s CPU time.
Feb  2 12:57:55 np0005605476 systemd-machined[208080]: Machine qemu-23-instance-00000017 terminated.
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [NOTICE]   (266936) : haproxy version is 2.8.14-c23fe91
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [NOTICE]   (266936) : path to executable is /usr/sbin/haproxy
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [WARNING]  (266936) : Exiting Master process...
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [WARNING]  (266936) : Exiting Master process...
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [ALERT]    (266936) : Current worker (266938) exited with code 143 (Terminated)
Feb  2 12:57:55 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[266932]: [WARNING]  (266936) : All workers exited. Exiting... (0)
Feb  2 12:57:55 np0005605476 systemd[1]: libpod-b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484.scope: Deactivated successfully.
Feb  2 12:57:55 np0005605476 podman[267256]: 2026-02-02 17:57:55.819754731 +0000 UTC m=+0.042729735 container died b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.823 239853 INFO nova.virt.libvirt.driver [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Instance destroyed successfully.#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.824 239853 DEBUG nova.objects.instance [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'resources' on Instance uuid eb6b61fa-cb2c-4e4d-be02-cdb398df790c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.837 239853 DEBUG nova.virt.libvirt.vif [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:57:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-681301186',display_name='tempest-TransferEncryptedVolumeTest-server-681301186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-681301186',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:57:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-3q91ajip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:57:33Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=eb6b61fa-cb2c-4e4d-be02-cdb398df790c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.838 239853 DEBUG nova.network.os_vif_util [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "address": "fa:16:3e:e0:fb:09", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0710648a-98", "ovs_interfaceid": "0710648a-98cc-4dd5-bb88-9ea33cef69c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.839 239853 DEBUG nova.network.os_vif_util [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.839 239853 DEBUG os_vif [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.841 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.841 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0710648a-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.843 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.845 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.847 239853 INFO os_vif [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:fb:09,bridge_name='br-int',has_traffic_filtering=True,id=0710648a-98cc-4dd5-bb88-9ea33cef69c2,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0710648a-98')#033[00m
Feb  2 12:57:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484-userdata-shm.mount: Deactivated successfully.
Feb  2 12:57:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3ae84ee2f3d0082862fbac0dc7ea6da53475b74a2570fadd4b3d13e23666bad2-merged.mount: Deactivated successfully.
Feb  2 12:57:55 np0005605476 podman[267256]: 2026-02-02 17:57:55.859616474 +0000 UTC m=+0.082591478 container cleanup b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:57:55 np0005605476 systemd[1]: libpod-conmon-b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484.scope: Deactivated successfully.
Feb  2 12:57:55 np0005605476 podman[267304]: 2026-02-02 17:57:55.919631324 +0000 UTC m=+0.041769017 container remove b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.924 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3a847b-3f34-4927-9fb1-c496df800b6d]: (4, ('Mon Feb  2 05:57:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484)\nb2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484\nMon Feb  2 05:57:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (b2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484)\nb2b946f22dfcf7a1d86b60a62c8c22f57c4e6c92846895f056ff69fd8bd67484\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.926 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[087d8881-273d-4fd3-9117-a6b3aae33fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.927 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.928 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 kernel: tap82a7f311-f0: left promiscuous mode
Feb  2 12:57:55 np0005605476 nova_compute[239846]: 2026-02-02 17:57:55.935 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.937 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[07fe51f6-4e91-46ef-be83-1c62d02de5be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.958 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4f892d-30c7-4c55-bc19-fea54b1d06e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.960 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc80ef2a-d594-4b81-99a6-dc8cf0e18abc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.972 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2a831e5d-5132-435b-9ff1-5ea75ce8ea0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429148, 'reachable_time': 34294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267330, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:55 np0005605476 systemd[1]: run-netns-ovnmeta\x2d82a7f311\x2dfed2\x2d4a09\x2d8203\x2d270dceb25c76.mount: Deactivated successfully.
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.975 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:57:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:55.976 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[2f90f0b3-2cda-4bcf-9db5-b69430ee9ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.008 239853 INFO nova.virt.libvirt.driver [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Deleting instance files /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c_del#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.009 239853 INFO nova.virt.libvirt.driver [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Deletion of /var/lib/nova/instances/eb6b61fa-cb2c-4e4d-be02-cdb398df790c_del complete#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.068 239853 INFO nova.compute.manager [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.068 239853 DEBUG oslo.service.loopingcall [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.069 239853 DEBUG nova.compute.manager [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.069 239853 DEBUG nova.network.neutron [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.402 239853 DEBUG oslo_concurrency.lockutils [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.403 239853 DEBUG oslo_concurrency.lockutils [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.423 239853 INFO nova.compute.manager [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Detaching volume 83bd8689-6041-4f31-b319-c5c060772922#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.605 239853 INFO nova.virt.block_device [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Attempting to driver detach volume 83bd8689-6041-4f31-b319-c5c060772922 from mountpoint /dev/vdb#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.659 239853 DEBUG nova.compute.manager [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-unplugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.659 239853 DEBUG oslo_concurrency.lockutils [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.660 239853 DEBUG oslo_concurrency.lockutils [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.660 239853 DEBUG oslo_concurrency.lockutils [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.660 239853 DEBUG nova.compute.manager [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] No waiting events found dispatching network-vif-unplugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.661 239853 DEBUG nova.compute.manager [req-cd49eed0-7442-40be-94d6-03fc31b0ad2d req-e9b4b3f6-74f8-4e4b-aa32-27442bdc1e7f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-unplugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.762 239853 DEBUG os_brick.encryptors [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a1c10faa-a29a-4160-90ec-f7e7216d397f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-83bd8689-6041-4f31-b319-c5c060772922', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '83bd8689-6041-4f31-b319-c5c060772922', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7fdaddd-b417-4d8e-a3d7-a7132f04c7bf', 'attached_at': '', 'detached_at': '', 'volume_id': '83bd8689-6041-4f31-b319-c5c060772922', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.769 239853 DEBUG nova.virt.libvirt.driver [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Attempting to detach device vdb from instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.770 239853 DEBUG nova.virt.libvirt.guest [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-83bd8689-6041-4f31-b319-c5c060772922">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <serial>83bd8689-6041-4f31-b319-c5c060772922</serial>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="bea147aa-8ee7-4106-bb82-caab816b88c2"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:57:56 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:57:56 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.777 239853 INFO nova.virt.libvirt.driver [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully detached device vdb from instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf from the persistent domain config.#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.778 239853 DEBUG nova.virt.libvirt.driver [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.778 239853 DEBUG nova.virt.libvirt.guest [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-83bd8689-6041-4f31-b319-c5c060772922">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  </source>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <serial>83bd8689-6041-4f31-b319-c5c060772922</serial>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  <encryption format="luks">
Feb  2 12:57:56 np0005605476 nova_compute[239846]:    <secret type="passphrase" uuid="bea147aa-8ee7-4106-bb82-caab816b88c2"/>
Feb  2 12:57:56 np0005605476 nova_compute[239846]:  </encryption>
Feb  2 12:57:56 np0005605476 nova_compute[239846]: </disk>
Feb  2 12:57:56 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.871 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770055076.8711252, d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.872 239853 DEBUG nova.virt.libvirt.driver [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 12:57:56 np0005605476 nova_compute[239846]: 2026-02-02 17:57:56.874 239853 INFO nova.virt.libvirt.driver [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully detached device vdb from instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf from the live domain config.#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.042 239853 DEBUG nova.objects.instance [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'flavor' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.081 239853 DEBUG oslo_concurrency.lockutils [None req-debc1e83-d0d7-4deb-ad8d-fe5627404183 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.122 239853 DEBUG nova.network.neutron [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.139 239853 INFO nova.compute.manager [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Took 1.07 seconds to deallocate network for instance.#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.331 239853 INFO nova.compute.manager [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.388 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.388 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:57 np0005605476 nova_compute[239846]: 2026-02-02 17:57:57.445 239853 DEBUG oslo_concurrency.processutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 431 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Feb  2 12:57:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:57:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4152218478' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:57:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:57:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4152218478' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.024 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.025 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.025 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.025 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.025 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.027 239853 INFO nova.compute.manager [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Terminating instance#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.028 239853 DEBUG nova.compute.manager [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350529188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.070 239853 DEBUG oslo_concurrency.processutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:57:58 np0005605476 kernel: tapac3697bb-38 (unregistering): left promiscuous mode
Feb  2 12:57:58 np0005605476 NetworkManager[49022]: <info>  [1770055078.0801] device (tapac3697bb-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.078 239853 DEBUG nova.compute.provider_tree [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.085 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:58Z|00224|binding|INFO|Releasing lport ac3697bb-389e-4638-84a5-0859a2819752 from this chassis (sb_readonly=0)
Feb  2 12:57:58 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:58Z|00225|binding|INFO|Setting lport ac3697bb-389e-4638-84a5-0859a2819752 down in Southbound
Feb  2 12:57:58 np0005605476 ovn_controller[146041]: 2026-02-02T17:57:58Z|00226|binding|INFO|Removing iface tapac3697bb-38 ovn-installed in OVS
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.092 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:4f:aa 10.100.0.11'], port_security=['fa:16:3e:7b:4f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd7fdaddd-b417-4d8e-a3d7-a7132f04c7bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3aa6d590-93b7-4292-90fc-74a1afc66cb3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=ac3697bb-389e-4638-84a5-0859a2819752) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.093 155391 INFO neutron.agent.ovn.metadata.agent [-] Port ac3697bb-389e-4638-84a5-0859a2819752 in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec unbound from our chassis#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.095 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bad2c851-1c12-4a83-9873-6096fe5f4eec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.097 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.096 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2f41b94b-0a34-4c2c-9ee5-10812ad01b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.097 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace which is not needed anymore#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.101 239853 DEBUG nova.scheduler.client.report [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.122 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb  2 12:57:58 np0005605476 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 14.785s CPU time.
Feb  2 12:57:58 np0005605476 systemd-machined[208080]: Machine qemu-22-instance-00000016 terminated.
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.158 239853 INFO nova.scheduler.client.report [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Deleted allocations for instance eb6b61fa-cb2c-4e4d-be02-cdb398df790c#033[00m
Feb  2 12:57:58 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [NOTICE]   (266741) : haproxy version is 2.8.14-c23fe91
Feb  2 12:57:58 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [NOTICE]   (266741) : path to executable is /usr/sbin/haproxy
Feb  2 12:57:58 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [WARNING]  (266741) : Exiting Master process...
Feb  2 12:57:58 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [ALERT]    (266741) : Current worker (266747) exited with code 143 (Terminated)
Feb  2 12:57:58 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[266720]: [WARNING]  (266741) : All workers exited. Exiting... (0)
Feb  2 12:57:58 np0005605476 systemd[1]: libpod-14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a.scope: Deactivated successfully.
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:57:58 np0005605476 podman[267460]: 2026-02-02 17:57:58.213039852 +0000 UTC m=+0.040016818 container died 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:57:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.238 239853 DEBUG oslo_concurrency.lockutils [None req-9d3ab325-fbef-46cc-bbd1-ac7b334bd16f a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a-userdata-shm.mount: Deactivated successfully.
Feb  2 12:57:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2c7c27ecb09f5118830249c5fadcad2cd0e2fefca708875e0246a5d631a2dd70-merged.mount: Deactivated successfully.
Feb  2 12:57:58 np0005605476 NetworkManager[49022]: <info>  [1770055078.2522] manager: (tapac3697bb-38): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Feb  2 12:57:58 np0005605476 podman[267460]: 2026-02-02 17:57:58.25272336 +0000 UTC m=+0.079700306 container cleanup 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.252 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.256 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 systemd[1]: libpod-conmon-14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a.scope: Deactivated successfully.
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.267 239853 INFO nova.virt.libvirt.driver [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Instance destroyed successfully.#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.268 239853 DEBUG nova.objects.instance [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'resources' on Instance uuid d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.281 239853 DEBUG nova.virt.libvirt.vif [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:57:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2072601040',display_name='tempest-TestEncryptedCinderVolumes-server-2072601040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2072601040',id=22,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCREanuEFPYE+eF4ceZLxhPDOcYQXJ3siOHiQQjA0XJeV9gs5eVNtGx+kCBb/xcJWUCobFqLGNuv1eGmJgYbbAp95zZtxlyFHNp8ldg9W1Yueybe1fM3snSM6n8XagKdBA==',key_name='tempest-keypair-1953777832',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:57:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-yn0fkm02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:57:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.281 239853 DEBUG nova.network.os_vif_util [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "ac3697bb-389e-4638-84a5-0859a2819752", "address": "fa:16:3e:7b:4f:aa", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac3697bb-38", "ovs_interfaceid": "ac3697bb-389e-4638-84a5-0859a2819752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.282 239853 DEBUG nova.network.os_vif_util [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.282 239853 DEBUG os_vif [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.283 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.284 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac3697bb-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.285 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.288 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.291 239853 INFO os_vif [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:4f:aa,bridge_name='br-int',has_traffic_filtering=True,id=ac3697bb-389e-4638-84a5-0859a2819752,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac3697bb-38')#033[00m
Feb  2 12:57:58 np0005605476 podman[267520]: 2026-02-02 17:57:58.319232823 +0000 UTC m=+0.047907330 container remove 14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.324 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7197f5bd-e817-4c84-a98a-67cc0f6180fc]: (4, ('Mon Feb  2 05:57:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a)\n14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a\nMon Feb  2 05:57:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a)\n14b43245d3345d27d7305a1e5b6eb705b1c7e95b84a403ba18f650e51d9b8a9a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.327 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a13302-1321-4db2-9e77-6c329458a215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.328 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.330 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 kernel: tapbad2c851-10: left promiscuous mode
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.337 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.338 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf978913-8c35-4f46-b929-9b2ae91920fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.350 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4e3a16-303e-444f-aff6-95c5a1519e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.351 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d82a141-26ef-4259-942f-76176ba0e404]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.366 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[38b65a42-0aab-48ca-834d-e94d21df2743]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428951, 'reachable_time': 34688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267583, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 systemd[1]: run-netns-ovnmeta\x2dbad2c851\x2d1c12\x2d4a83\x2d9873\x2d6096fe5f4eec.mount: Deactivated successfully.
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.370 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:57:58 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:57:58.371 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[fe43251b-b516-43e0-9ab9-ae8cace0f7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.513 239853 INFO nova.virt.libvirt.driver [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Deleting instance files /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_del#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.513 239853 INFO nova.virt.libvirt.driver [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Deletion of /var/lib/nova/instances/d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf_del complete#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.563 239853 INFO nova.compute.manager [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.564 239853 DEBUG oslo.service.loopingcall [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.564 239853 DEBUG nova.compute.manager [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.564 239853 DEBUG nova.network.neutron [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.621875647 +0000 UTC m=+0.033805222 container create 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 12:57:58 np0005605476 systemd[1]: Started libpod-conmon-3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62.scope.
Feb  2 12:57:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.68483871 +0000 UTC m=+0.096768305 container init 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.689834651 +0000 UTC m=+0.101764216 container start 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 12:57:58 np0005605476 sharp_meitner[267613]: 167 167
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.694001818 +0000 UTC m=+0.105931423 container attach 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:57:58 np0005605476 systemd[1]: libpod-3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62.scope: Deactivated successfully.
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.694658247 +0000 UTC m=+0.106587822 container died 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.605988119 +0000 UTC m=+0.017917724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:57:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f7f1f9f86e5be9d51d5e5afb6a812a7d803276141400e32a97c7381052348cec-merged.mount: Deactivated successfully.
Feb  2 12:57:58 np0005605476 podman[267597]: 2026-02-02 17:57:58.723778057 +0000 UTC m=+0.135707632 container remove 3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:57:58 np0005605476 systemd[1]: libpod-conmon-3ee1202a59d6bc186b22dfa66369fd74d7564d612e020bc3444327d926cbcf62.scope: Deactivated successfully.
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.746 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.747 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.747 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.747 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "eb6b61fa-cb2c-4e4d-be02-cdb398df790c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.748 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] No waiting events found dispatching network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.748 239853 WARNING nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received unexpected event network-vif-plugged-0710648a-98cc-4dd5-bb88-9ea33cef69c2 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.748 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Received event network-vif-deleted-0710648a-98cc-4dd5-bb88-9ea33cef69c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.749 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-unplugged-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.749 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.749 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.749 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.750 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] No waiting events found dispatching network-vif-unplugged-ac3697bb-389e-4638-84a5-0859a2819752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.750 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-unplugged-ac3697bb-389e-4638-84a5-0859a2819752 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.750 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.750 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.750 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.751 239853 DEBUG oslo_concurrency.lockutils [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.751 239853 DEBUG nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] No waiting events found dispatching network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:57:58 np0005605476 nova_compute[239846]: 2026-02-02 17:57:58.751 239853 WARNING nova.compute.manager [req-944f6ef6-7481-408c-ab84-a48167572d52 req-bb8bafe3-3727-466b-8116-31356898c3e1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received unexpected event network-vif-plugged-ac3697bb-389e-4638-84a5-0859a2819752 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 12:57:58 np0005605476 podman[267638]: 2026-02-02 17:57:58.83962372 +0000 UTC m=+0.035580513 container create 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 12:57:58 np0005605476 systemd[1]: Started libpod-conmon-577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2.scope.
Feb  2 12:57:58 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:57:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:58 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:57:58 np0005605476 podman[267638]: 2026-02-02 17:57:58.890920595 +0000 UTC m=+0.086877418 container init 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 12:57:58 np0005605476 podman[267638]: 2026-02-02 17:57:58.900424423 +0000 UTC m=+0.096381216 container start 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:57:58 np0005605476 podman[267638]: 2026-02-02 17:57:58.904506868 +0000 UTC m=+0.100463661 container attach 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:57:58 np0005605476 podman[267638]: 2026-02-02 17:57:58.825013559 +0000 UTC m=+0.020970382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:57:58 np0005605476 podman[267649]: 2026-02-02 17:57:58.944920806 +0000 UTC m=+0.078462961 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:57:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:57:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:57:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:57:59 np0005605476 cool_wilbur[267654]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:57:59 np0005605476 cool_wilbur[267654]: --> All data devices are unavailable
Feb  2 12:57:59 np0005605476 systemd[1]: libpod-577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2.scope: Deactivated successfully.
Feb  2 12:57:59 np0005605476 podman[267701]: 2026-02-02 17:57:59.356371225 +0000 UTC m=+0.024081859 container died 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 12:57:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7eb26e9d408fdf73902760aa076bf1dbf85ada7dfd52cd352af79e4a0e5af76c-merged.mount: Deactivated successfully.
Feb  2 12:57:59 np0005605476 podman[267701]: 2026-02-02 17:57:59.395308592 +0000 UTC m=+0.063019206 container remove 577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wilbur, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 12:57:59 np0005605476 systemd[1]: libpod-conmon-577b8d4fe30d8313da121d3c136e308a6f2865ce1474452f1c5a7f4d555d69a2.scope: Deactivated successfully.
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.479 239853 DEBUG nova.network.neutron [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.505 239853 INFO nova.compute.manager [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.553 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.555 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:57:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 312 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 37 KiB/s wr, 93 op/s
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.645 239853 DEBUG oslo_concurrency.processutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.815109796 +0000 UTC m=+0.036682654 container create 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:57:59 np0005605476 systemd[1]: Started libpod-conmon-556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91.scope.
Feb  2 12:57:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.873323956 +0000 UTC m=+0.094896834 container init 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.878505472 +0000 UTC m=+0.100078320 container start 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.881713102 +0000 UTC m=+0.103285970 container attach 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:57:59 np0005605476 friendly_sammet[267813]: 167 167
Feb  2 12:57:59 np0005605476 systemd[1]: libpod-556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91.scope: Deactivated successfully.
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.882892456 +0000 UTC m=+0.104465314 container died 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.799168797 +0000 UTC m=+0.020741685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:57:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3eaf7d8d1c74484dcbca7bed8ff26ad1a93f3e1ae8c8478353feff4f9a8f3718-merged.mount: Deactivated successfully.
Feb  2 12:57:59 np0005605476 podman[267797]: 2026-02-02 17:57:59.91252919 +0000 UTC m=+0.134102048 container remove 556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 12:57:59 np0005605476 systemd[1]: libpod-conmon-556a635dcb3817fcb887f7a53680a27a8f5db16a67db6696573dc9abcd3e8a91.scope: Deactivated successfully.
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.995 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055064.9944227, cf91512f-2990-45f5-9c60-7abecad4d703 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:57:59 np0005605476 nova_compute[239846]: 2026-02-02 17:57:59.996 239853 INFO nova.compute.manager [-] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.018 239853 DEBUG nova.compute.manager [None req-78f6014b-b0a3-4e84-82a1-59a1746fd265 - - - - - -] [instance: cf91512f-2990-45f5-9c60-7abecad4d703] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.027003335 +0000 UTC m=+0.035838811 container create 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 12:58:00 np0005605476 systemd[1]: Started libpod-conmon-6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d.scope.
Feb  2 12:58:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:58:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38236b475706e4854bab7525fdfabd005b43f6fe081d154e5b15d8be8fa42bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38236b475706e4854bab7525fdfabd005b43f6fe081d154e5b15d8be8fa42bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38236b475706e4854bab7525fdfabd005b43f6fe081d154e5b15d8be8fa42bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:00 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38236b475706e4854bab7525fdfabd005b43f6fe081d154e5b15d8be8fa42bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.10031991 +0000 UTC m=+0.109155416 container init 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.105457754 +0000 UTC m=+0.114293230 container start 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.011696343 +0000 UTC m=+0.020531839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.109141628 +0000 UTC m=+0.117977114 container attach 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284814345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.226 239853 DEBUG oslo_concurrency.processutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.267 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.272 239853 DEBUG nova.compute.provider_tree [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.288 239853 DEBUG nova.scheduler.client.report [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.309 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.338 239853 INFO nova.scheduler.client.report [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Deleted allocations for instance d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf#033[00m
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]: {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    "0": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "devices": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "/dev/loop3"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            ],
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_name": "ceph_lv0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_size": "21470642176",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "name": "ceph_lv0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "tags": {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_name": "ceph",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.crush_device_class": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.encrypted": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.objectstore": "bluestore",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_id": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.vdo": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.with_tpm": "0"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            },
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "vg_name": "ceph_vg0"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        }
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    ],
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    "1": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "devices": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "/dev/loop4"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            ],
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_name": "ceph_lv1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_size": "21470642176",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "name": "ceph_lv1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "tags": {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_name": "ceph",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.crush_device_class": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.encrypted": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.objectstore": "bluestore",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_id": "1",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.vdo": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.with_tpm": "0"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            },
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "vg_name": "ceph_vg1"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        }
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    ],
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    "2": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "devices": [
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "/dev/loop5"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            ],
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_name": "ceph_lv2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_size": "21470642176",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "name": "ceph_lv2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "tags": {
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.cluster_name": "ceph",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.crush_device_class": "",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.encrypted": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.objectstore": "bluestore",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osd_id": "2",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.vdo": "0",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:                "ceph.with_tpm": "0"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            },
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "type": "block",
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:            "vg_name": "ceph_vg2"
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:        }
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]:    ]
Feb  2 12:58:00 np0005605476 thirsty_williams[267851]: }
Feb  2 12:58:00 np0005605476 systemd[1]: libpod-6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d.scope: Deactivated successfully.
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.380911713 +0000 UTC m=+0.389747209 container died 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.403 239853 DEBUG oslo_concurrency.lockutils [None req-16d8f4da-cbad-4ae2-b3b4-efcee463a83d c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f38236b475706e4854bab7525fdfabd005b43f6fe081d154e5b15d8be8fa42bc-merged.mount: Deactivated successfully.
Feb  2 12:58:00 np0005605476 podman[267835]: 2026-02-02 17:58:00.419974373 +0000 UTC m=+0.428809849 container remove 6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:58:00 np0005605476 systemd[1]: libpod-conmon-6404d33bedb821e141ba9591c2b9462efab66cc3cbe36191447ecca70e0c3c9d.scope: Deactivated successfully.
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Feb  2 12:58:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Feb  2 12:58:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:00.667 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.811748468 +0000 UTC m=+0.037485597 container create 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 12:58:00 np0005605476 nova_compute[239846]: 2026-02-02 17:58:00.819 239853 DEBUG nova.compute.manager [req-ab2db49b-5e49-4594-96a2-cb45ee3b6f1a req-2398b779-6b73-4dd5-b903-ec40abd0c96b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Received event network-vif-deleted-ac3697bb-389e-4638-84a5-0859a2819752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:00 np0005605476 systemd[1]: Started libpod-conmon-4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed.scope.
Feb  2 12:58:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.872650224 +0000 UTC m=+0.098387373 container init 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.877892941 +0000 UTC m=+0.103630070 container start 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:58:00 np0005605476 magical_lovelace[267952]: 167 167
Feb  2 12:58:00 np0005605476 systemd[1]: libpod-4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed.scope: Deactivated successfully.
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.883279033 +0000 UTC m=+0.109016182 container attach 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.883796138 +0000 UTC m=+0.109533257 container died 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.796787777 +0000 UTC m=+0.022524936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:58:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-a3dec43a14f5bccedec9805ed0181ac51799867a4d644dce09bf5e44d4ef1c44-merged.mount: Deactivated successfully.
Feb  2 12:58:00 np0005605476 podman[267936]: 2026-02-02 17:58:00.916129858 +0000 UTC m=+0.141866977 container remove 4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 12:58:00 np0005605476 systemd[1]: libpod-conmon-4c2347f19de2ed266117ab13dba937cd536bea69572c152b1b2bb16cd04894ed.scope: Deactivated successfully.
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.041409437 +0000 UTC m=+0.033553486 container create 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 12:58:01 np0005605476 systemd[1]: Started libpod-conmon-30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4.scope.
Feb  2 12:58:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:58:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29800c3b44b12eb2750506b24f33a6e42d78d3529070a42a2c7c76c81b8ab343/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29800c3b44b12eb2750506b24f33a6e42d78d3529070a42a2c7c76c81b8ab343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29800c3b44b12eb2750506b24f33a6e42d78d3529070a42a2c7c76c81b8ab343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29800c3b44b12eb2750506b24f33a6e42d78d3529070a42a2c7c76c81b8ab343/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.027446754 +0000 UTC m=+0.019590823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.128709896 +0000 UTC m=+0.120853955 container init 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.136329091 +0000 UTC m=+0.128473140 container start 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.140457147 +0000 UTC m=+0.132601206 container attach 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:58:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 271 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 37 KiB/s wr, 110 op/s
Feb  2 12:58:01 np0005605476 lvm[268071]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:58:01 np0005605476 lvm[268072]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:58:01 np0005605476 lvm[268071]: VG ceph_vg0 finished
Feb  2 12:58:01 np0005605476 lvm[268072]: VG ceph_vg1 finished
Feb  2 12:58:01 np0005605476 lvm[268074]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:58:01 np0005605476 lvm[268074]: VG ceph_vg2 finished
Feb  2 12:58:01 np0005605476 sharp_neumann[267993]: {}
Feb  2 12:58:01 np0005605476 systemd[1]: libpod-30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4.scope: Deactivated successfully.
Feb  2 12:58:01 np0005605476 systemd[1]: libpod-30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4.scope: Consumed 1.018s CPU time.
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.886804899 +0000 UTC m=+0.878948988 container died 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:58:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-29800c3b44b12eb2750506b24f33a6e42d78d3529070a42a2c7c76c81b8ab343-merged.mount: Deactivated successfully.
Feb  2 12:58:01 np0005605476 podman[267976]: 2026-02-02 17:58:01.936691924 +0000 UTC m=+0.928835983 container remove 30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_neumann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:58:01 np0005605476 systemd[1]: libpod-conmon-30f64f8f4866d5580a8b26a7f5c0f6ecbe0e1f82737d60e1f77b10453ccb6fa4.scope: Deactivated successfully.
Feb  2 12:58:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:58:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:58:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:58:02 np0005605476 nova_compute[239846]: 2026-02-02 17:58:02.214 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:02 np0005605476 nova_compute[239846]: 2026-02-02 17:58:02.339 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028002349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028002349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:58:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:58:03 np0005605476 nova_compute[239846]: 2026-02-02 17:58:03.286 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 271 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 37 KiB/s wr, 110 op/s
Feb  2 12:58:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1285764245' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1285764245' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302710549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302710549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:05 np0005605476 nova_compute[239846]: 2026-02-02 17:58:05.268 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 30 KiB/s wr, 102 op/s
Feb  2 12:58:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:06 np0005605476 nova_compute[239846]: 2026-02-02 17:58:06.828 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055071.826972, 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:06 np0005605476 nova_compute[239846]: 2026-02-02 17:58:06.828 239853 INFO nova.compute.manager [-] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:58:06 np0005605476 nova_compute[239846]: 2026-02-02 17:58:06.845 239853 DEBUG nova.compute.manager [None req-a8b0c7c0-c773-4bef-9471-744309355947 - - - - - -] [instance: 918af4a9-09ac-4a18-b2bd-f7ea2c0e7452] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 22 KiB/s wr, 101 op/s
Feb  2 12:58:08 np0005605476 nova_compute[239846]: 2026-02-02 17:58:08.288 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 KiB/s wr, 52 op/s
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.270 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.695 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.695 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.711 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.782 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.783 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.791 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.792 239853 INFO nova.compute.claims [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.822 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055075.820906, eb6b61fa-cb2c-4e4d-be02-cdb398df790c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.822 239853 INFO nova.compute.manager [-] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.846 239853 DEBUG nova.compute.manager [None req-bef23919-add9-45b6-aa3d-06c0b36a5a6b - - - - - -] [instance: eb6b61fa-cb2c-4e4d-be02-cdb398df790c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:10 np0005605476 nova_compute[239846]: 2026-02-02 17:58:10.889 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284413160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.433 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.439 239853 DEBUG nova.compute.provider_tree [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.455 239853 DEBUG nova.scheduler.client.report [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.475 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.476 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.521 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.521 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.542 239853 INFO nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.562 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.618 239853 INFO nova.virt.block_device [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Booting with volume afd56270-31f2-45f6-8185-190fa9bfd997 at /dev/vda#033[00m
Feb  2 12:58:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 652 B/s wr, 27 op/s
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.848 239853 DEBUG os_brick.utils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.849 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.859 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.859 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5de0e0cc-05df-4884-bd2d-39edce035cd1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.861 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.867 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.867 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[63da3d83-9e85-460b-ae84-2e275c140dd3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.868 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.875 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.875 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2777fd-532c-4916-9829-896eb399d3b3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.876 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[85d2d306-25e4-47ce-9009-3d56515a4be7]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.877 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.894 239853 DEBUG nova.policy [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3de5c2f3ec44d4684754f1707ba5236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.898 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.899 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.900 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.900 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.900 239853 DEBUG os_brick.utils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:58:11 np0005605476 nova_compute[239846]: 2026-02-02 17:58:11.901 239853 DEBUG nova.virt.block_device [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updating existing volume attachment record: a9028cac-8303-4f1e-9206-fa36a6c98ad0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:58:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/714348887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:12 np0005605476 nova_compute[239846]: 2026-02-02 17:58:12.681 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Successfully created port: 07e92c78-e0a9-467a-bd04-99569e66ddf8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.115 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.117 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.117 239853 INFO nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Creating image(s)#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.117 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.118 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Ensure instance console log exists: /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.118 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.118 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.119 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.265 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055078.2644289, d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.266 239853 INFO nova.compute.manager [-] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.286 239853 DEBUG nova.compute.manager [None req-d3971bcf-cf10-4c97-acc8-fce10e078f63 - - - - - -] [instance: d7fdaddd-b417-4d8e-a3d7-a7132f04c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.290 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.502 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Successfully updated port: 07e92c78-e0a9-467a-bd04-99569e66ddf8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.517 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.518 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquired lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.518 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.602 239853 DEBUG nova.compute.manager [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-changed-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.603 239853 DEBUG nova.compute.manager [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Refreshing instance network info cache due to event network-changed-07e92c78-e0a9-467a-bd04-99569e66ddf8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.603 239853 DEBUG oslo_concurrency.lockutils [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 597 B/s wr, 25 op/s
Feb  2 12:58:13 np0005605476 nova_compute[239846]: 2026-02-02 17:58:13.639 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.631 239853 DEBUG nova.network.neutron [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updating instance_info_cache with network_info: [{"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.654 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Releasing lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.654 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Instance network_info: |[{"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.655 239853 DEBUG oslo_concurrency.lockutils [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.655 239853 DEBUG nova.network.neutron [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Refreshing network info cache for port 07e92c78-e0a9-467a-bd04-99569e66ddf8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.657 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Start _get_guest_xml network_info=[{"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': 'a9028cac-8303-4f1e-9206-fa36a6c98ad0', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '976b3ab3-0b37-4883-8fc0-b74a428132c9', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'serial': 'afd56270-31f2-45f6-8185-190fa9bfd997'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.661 239853 WARNING nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.665 239853 DEBUG nova.virt.libvirt.host [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.665 239853 DEBUG nova.virt.libvirt.host [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.668 239853 DEBUG nova.virt.libvirt.host [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.669 239853 DEBUG nova.virt.libvirt.host [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.669 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.669 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.670 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.670 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.670 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.670 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.670 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.671 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.671 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.671 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.671 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.672 239853 DEBUG nova.virt.hardware [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.691 239853 DEBUG nova.storage.rbd_utils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:14 np0005605476 nova_compute[239846]: 2026-02-02 17:58:14.694 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.073822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095073852, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2509, "num_deletes": 519, "total_data_size": 3314053, "memory_usage": 3391600, "flush_reason": "Manual Compaction"}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095091071, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3250300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29379, "largest_seqno": 31887, "table_properties": {"data_size": 3239345, "index_size": 6553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26424, "raw_average_key_size": 20, "raw_value_size": 3215122, "raw_average_value_size": 2450, "num_data_blocks": 286, "num_entries": 1312, "num_filter_entries": 1312, "num_deletions": 519, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770054917, "oldest_key_time": 1770054917, "file_creation_time": 1770055095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17280 microseconds, and 5106 cpu microseconds.
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.091103) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3250300 bytes OK
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.091118) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.093462) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.093504) EVENT_LOG_v1 {"time_micros": 1770055095093494, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.093530) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3302367, prev total WAL file size 3302408, number of live WAL files 2.
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.094204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3174KB)], [62(8912KB)]
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095094240, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12376756, "oldest_snapshot_seqno": -1}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6234 keys, 10507439 bytes, temperature: kUnknown
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095146103, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10507439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10459460, "index_size": 31293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157140, "raw_average_key_size": 25, "raw_value_size": 10341154, "raw_average_value_size": 1658, "num_data_blocks": 1255, "num_entries": 6234, "num_filter_entries": 6234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.146385) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10507439 bytes
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.147227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.1 rd, 202.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7280, records dropped: 1046 output_compression: NoCompression
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.147246) EVENT_LOG_v1 {"time_micros": 1770055095147237, "job": 34, "event": "compaction_finished", "compaction_time_micros": 51976, "compaction_time_cpu_micros": 22709, "output_level": 6, "num_output_files": 1, "total_output_size": 10507439, "num_input_records": 7280, "num_output_records": 6234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095147539, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055095148324, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.094159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.148351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.148355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.148357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.148359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-17:58:15.148361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018481029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.255 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.272 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.360 239853 DEBUG os_brick.encryptors [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '976b3ab3-0b37-4883-8fc0-b74a428132c9', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd56270-31f2-45f6-8185-190fa9bfd997', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.362 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.376 239853 DEBUG barbicanclient.v1.secrets [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.376 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.404 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.405 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.429 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.430 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.454 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.454 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.471 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.472 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.491 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.492 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.518 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.519 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.541 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.542 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.569 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.569 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.586 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.587 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.609 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.610 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 818 B/s wr, 10 op/s
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.630 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.631 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.648 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.649 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.669 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.670 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.696 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.697 239853 INFO barbicanclient.base [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/a1c01ffd-a7b4-4ad4-8bcc-e1a9e3c2d0e4#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.719 239853 DEBUG barbicanclient.client [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.720 239853 DEBUG nova.virt.libvirt.host [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <volume>afd56270-31f2-45f6-8185-190fa9bfd997</volume>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:58:15 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:58:15 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.748 239853 DEBUG nova.virt.libvirt.vif [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1551732628',display_name='tempest-TransferEncryptedVolumeTest-server-1551732628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1551732628',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-oag46rsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:58:11Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=976b3ab3-0b37-4883-8fc0-b74a428132c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.750 239853 DEBUG nova.network.os_vif_util [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.751 239853 DEBUG nova.network.os_vif_util [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.753 239853 DEBUG nova.objects.instance [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 976b3ab3-0b37-4883-8fc0-b74a428132c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.767 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <uuid>976b3ab3-0b37-4883-8fc0-b74a428132c9</uuid>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <name>instance-00000018</name>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1551732628</nova:name>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:58:14</nova:creationTime>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:user uuid="a3de5c2f3ec44d4684754f1707ba5236">tempest-TransferEncryptedVolumeTest-1386167090-project-member</nova:user>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:project uuid="224fb1fcaf0e4ffb9c3e3e7792ff25c6">tempest-TransferEncryptedVolumeTest-1386167090</nova:project>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <nova:port uuid="07e92c78-e0a9-467a-bd04-99569e66ddf8">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="serial">976b3ab3-0b37-4883-8fc0-b74a428132c9</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="uuid">976b3ab3-0b37-4883-8fc0-b74a428132c9</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-afd56270-31f2-45f6-8185-190fa9bfd997">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <serial>afd56270-31f2-45f6-8185-190fa9bfd997</serial>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="50f89de9-0877-497f-817b-3a41b617336c"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:86:c6:3b"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <target dev="tap07e92c78-e0"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/console.log" append="off"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:58:15 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:58:15 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:58:15 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:58:15 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.768 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Preparing to wait for external event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.768 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.768 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.769 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.770 239853 DEBUG nova.virt.libvirt.vif [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1551732628',display_name='tempest-TransferEncryptedVolumeTest-server-1551732628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1551732628',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-oag46rsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:58:11Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=976b3ab3-0b37-4883-8fc0-b74a428132c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.770 239853 DEBUG nova.network.os_vif_util [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.771 239853 DEBUG nova.network.os_vif_util [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.771 239853 DEBUG os_vif [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.772 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.772 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.773 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.776 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.776 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07e92c78-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.776 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07e92c78-e0, col_values=(('external_ids', {'iface-id': '07e92c78-e0a9-467a-bd04-99569e66ddf8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:c6:3b', 'vm-uuid': '976b3ab3-0b37-4883-8fc0-b74a428132c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.778 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:15 np0005605476 NetworkManager[49022]: <info>  [1770055095.7794] manager: (tap07e92c78-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.781 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.786 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.787 239853 INFO os_vif [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0')#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.840 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.842 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.843 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No VIF found with MAC fa:16:3e:86:c6:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.843 239853 INFO nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Using config drive#033[00m
Feb  2 12:58:15 np0005605476 nova_compute[239846]: 2026-02-02 17:58:15.863 239853 DEBUG nova.storage.rbd_utils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.460 239853 DEBUG nova.network.neutron [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updated VIF entry in instance network info cache for port 07e92c78-e0a9-467a-bd04-99569e66ddf8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.463 239853 DEBUG nova.network.neutron [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updating instance_info_cache with network_info: [{"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.491 239853 DEBUG oslo_concurrency.lockutils [req-3857d154-3144-45ac-a08d-d02a1808973e req-fac9d33d-735c-4e0f-ab93-518a4e2bc4b5 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.573 239853 INFO nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Creating config drive at /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.579 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplqkrr5ak execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.709 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplqkrr5ak" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.731 239853 DEBUG nova.storage.rbd_utils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.734 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config 976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.833 239853 DEBUG oslo_concurrency.processutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config 976b3ab3-0b37-4883-8fc0-b74a428132c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.834 239853 INFO nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Deleting local config drive /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9/disk.config because it was imported into RBD.#033[00m
Feb  2 12:58:16 np0005605476 kernel: tap07e92c78-e0: entered promiscuous mode
Feb  2 12:58:16 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:16Z|00227|binding|INFO|Claiming lport 07e92c78-e0a9-467a-bd04-99569e66ddf8 for this chassis.
Feb  2 12:58:16 np0005605476 NetworkManager[49022]: <info>  [1770055096.8784] manager: (tap07e92c78-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Feb  2 12:58:16 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:16Z|00228|binding|INFO|07e92c78-e0a9-467a-bd04-99569e66ddf8: Claiming fa:16:3e:86:c6:3b 10.100.0.7
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.879 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.881 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.883 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.891 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c6:3b 10.100.0.7'], port_security=['fa:16:3e:86:c6:3b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '976b3ab3-0b37-4883-8fc0-b74a428132c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ed4d424-2957-4e57-bfeb-8d8148412d60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=07e92c78-e0a9-467a-bd04-99569e66ddf8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.892 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 07e92c78-e0a9-467a-bd04-99569e66ddf8 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 bound to our chassis#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.893 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82a7f311-fed2-4a09-8203-270dceb25c76#033[00m
Feb  2 12:58:16 np0005605476 systemd-udevd[268260]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:58:16 np0005605476 systemd-machined[208080]: New machine qemu-24-instance-00000018.
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.903 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[61fb5a6e-a161-4d50-b91f-dcbcc2b03c38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.904 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82a7f311-f1 in ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.905 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82a7f311-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.906 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e79365cd-f8dc-4e3c-b6b5-591ba2a33b1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.906 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c263e34b-7181-4a0d-9d5d-5c04292a895d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 NetworkManager[49022]: <info>  [1770055096.9108] device (tap07e92c78-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:58:16 np0005605476 NetworkManager[49022]: <info>  [1770055096.9116] device (tap07e92c78-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.916 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[a5799cd2-0533-48ca-8d0c-2940f3efb5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.921 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:16 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:16Z|00229|binding|INFO|Setting lport 07e92c78-e0a9-467a-bd04-99569e66ddf8 ovn-installed in OVS
Feb  2 12:58:16 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:16Z|00230|binding|INFO|Setting lport 07e92c78-e0a9-467a-bd04-99569e66ddf8 up in Southbound
Feb  2 12:58:16 np0005605476 nova_compute[239846]: 2026-02-02 17:58:16.924 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.925 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[37eb50a0-c5a6-438e-b981-c16160e21ca8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.946 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[4be1ed8d-baaf-4638-b167-1b451143bd8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 systemd-udevd[268263]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:58:16 np0005605476 NetworkManager[49022]: <info>  [1770055096.9521] manager: (tap82a7f311-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.951 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c60a0f9-c954-4d4b-a6dd-b52b08e31106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.972 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ed7a53-7e9a-42ac-85fe-8fb22a8cb67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.975 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[8c23bbee-ce0a-4259-aef7-9d6074c4deee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:16 np0005605476 NetworkManager[49022]: <info>  [1770055096.9887] device (tap82a7f311-f0): carrier: link connected
Feb  2 12:58:16 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:16.991 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[5feb6f6b-d876-428e-bb70-83299e156873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.003 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6f7f6b-9fd7-473a-b061-93b64207bdc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433832, 'reachable_time': 25754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268292, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.013 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad21849-cccb-41b4-8444-58aa06085259]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:34d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433832, 'tstamp': 433832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268293, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.022 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[acdf7935-9aee-40ec-9c8c-9891f937617f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433832, 'reachable_time': 25754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268294, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.041 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1751f19d-e126-488f-98df-036e12316521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.078 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[586628a3-f60f-4c30-b36b-898e57ff042e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.079 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.079 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.080 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82a7f311-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:17 np0005605476 kernel: tap82a7f311-f0: entered promiscuous mode
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.081 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:17 np0005605476 NetworkManager[49022]: <info>  [1770055097.0851] manager: (tap82a7f311-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.085 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82a7f311-f0, col_values=(('external_ids', {'iface-id': '51e5cd2d-8b15-4de8-985f-c87fe41124e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:17 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:17Z|00231|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.086 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.089 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.090 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ea63a6-7ea7-4c58-80f0-1a2a3d2457d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.091 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:58:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:17.091 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'env', 'PROCESS_TAG=haproxy-82a7f311-fed2-4a09-8203-270dceb25c76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82a7f311-fed2-4a09-8203-270dceb25c76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.092 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.417 239853 DEBUG nova.compute.manager [req-e44aa31f-a17d-402b-aeee-d6dbfb7cd584 req-b1110421-208f-4ad4-a0ba-1c9a438dc520 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.417 239853 DEBUG oslo_concurrency.lockutils [req-e44aa31f-a17d-402b-aeee-d6dbfb7cd584 req-b1110421-208f-4ad4-a0ba-1c9a438dc520 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.418 239853 DEBUG oslo_concurrency.lockutils [req-e44aa31f-a17d-402b-aeee-d6dbfb7cd584 req-b1110421-208f-4ad4-a0ba-1c9a438dc520 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.418 239853 DEBUG oslo_concurrency.lockutils [req-e44aa31f-a17d-402b-aeee-d6dbfb7cd584 req-b1110421-208f-4ad4-a0ba-1c9a438dc520 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:17 np0005605476 nova_compute[239846]: 2026-02-02 17:58:17.418 239853 DEBUG nova.compute.manager [req-e44aa31f-a17d-402b-aeee-d6dbfb7cd584 req-b1110421-208f-4ad4-a0ba-1c9a438dc520 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Processing event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:58:17 np0005605476 podman[268326]: 2026-02-02 17:58:17.428292676 +0000 UTC m=+0.038558727 container create 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 12:58:17 np0005605476 systemd[1]: Started libpod-conmon-354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712.scope.
Feb  2 12:58:17 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:58:17 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e04ecf8b54b94559645768eea4ed5a5caec9fb44cf0efbbf7cd161541dfb39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:17 np0005605476 podman[268326]: 2026-02-02 17:58:17.488570034 +0000 UTC m=+0.098836115 container init 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:58:17 np0005605476 podman[268326]: 2026-02-02 17:58:17.493380449 +0000 UTC m=+0.103646500 container start 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:58:17 np0005605476 podman[268326]: 2026-02-02 17:58:17.407647874 +0000 UTC m=+0.017913945 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:58:17 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [NOTICE]   (268381) : New worker (268383) forked
Feb  2 12:58:17 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [NOTICE]   (268381) : Loading success.
Feb  2 12:58:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 B/s wr, 9 op/s
Feb  2 12:58:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/140192495' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/140192495' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498131645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498131645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.496 239853 DEBUG nova.compute.manager [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.497 239853 DEBUG oslo_concurrency.lockutils [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.497 239853 DEBUG oslo_concurrency.lockutils [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.497 239853 DEBUG oslo_concurrency.lockutils [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.497 239853 DEBUG nova.compute.manager [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] No waiting events found dispatching network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.497 239853 WARNING nova.compute.manager [req-e2883283-535f-4aca-af68-85a08684faa0 req-14b4bb07-bb95-4078-ade3-ce90733e99c4 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received unexpected event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:58:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 KiB/s wr, 45 op/s
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.804 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055099.8038318, 976b3ab3-0b37-4883-8fc0-b74a428132c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.805 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] VM Started (Lifecycle Event)#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.807 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.811 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.814 239853 INFO nova.virt.libvirt.driver [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Instance spawned successfully.#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.814 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.851 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.855 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.876 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.876 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055099.803998, 976b3ab3-0b37-4883-8fc0-b74a428132c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.876 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.880 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.881 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.881 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.882 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.882 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.882 239853 DEBUG nova.virt.libvirt.driver [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.957 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.960 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055099.8102791, 976b3ab3-0b37-4883-8fc0-b74a428132c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:19 np0005605476 nova_compute[239846]: 2026-02-02 17:58:19.961 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.034 239853 INFO nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Took 6.92 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.034 239853 DEBUG nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.036 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.044 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.080 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.119 239853 INFO nova.compute.manager [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Took 9.37 seconds to build instance.#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.137 239853 DEBUG oslo_concurrency.lockutils [None req-6ec59933-c223-4c46-bc77-8f90bcbdf80a a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.275 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/834330159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/834330159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:20 np0005605476 nova_compute[239846]: 2026-02-02 17:58:20.778 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 109 op/s
Feb  2 12:58:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1052250175' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1052250175' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:22 np0005605476 NetworkManager[49022]: <info>  [1770055102.2142] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Feb  2 12:58:22 np0005605476 NetworkManager[49022]: <info>  [1770055102.2148] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.213 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.268 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:22Z|00232|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.283 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.474 239853 DEBUG nova.compute.manager [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-changed-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.474 239853 DEBUG nova.compute.manager [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Refreshing instance network info cache due to event network-changed-07e92c78-e0a9-467a-bd04-99569e66ddf8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.475 239853 DEBUG oslo_concurrency.lockutils [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.475 239853 DEBUG oslo_concurrency.lockutils [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:22 np0005605476 nova_compute[239846]: 2026-02-02 17:58:22.475 239853 DEBUG nova.network.neutron [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Refreshing network info cache for port 07e92c78-e0a9-467a-bd04-99569e66ddf8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:58:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443965425' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:22 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:22 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443965425' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:23 np0005605476 nova_compute[239846]: 2026-02-02 17:58:23.591 239853 DEBUG nova.network.neutron [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updated VIF entry in instance network info cache for port 07e92c78-e0a9-467a-bd04-99569e66ddf8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:58:23 np0005605476 nova_compute[239846]: 2026-02-02 17:58:23.592 239853 DEBUG nova.network.neutron [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updating instance_info_cache with network_info: [{"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:23 np0005605476 nova_compute[239846]: 2026-02-02 17:58:23.618 239853 DEBUG oslo_concurrency.lockutils [req-76e98b1a-6154-4e9d-bf80-4e4eb0c91702 req-940dacaf-ed7d-4d24-b51b-5506abfc5b71 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-976b3ab3-0b37-4883-8fc0-b74a428132c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 19 KiB/s wr, 109 op/s
Feb  2 12:58:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Feb  2 12:58:24 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Feb  2 12:58:24 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Feb  2 12:58:25 np0005605476 nova_compute[239846]: 2026-02-02 17:58:25.309 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448829194' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:25 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448829194' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:25 np0005605476 podman[268399]: 2026-02-02 17:58:25.593214364 +0000 UTC m=+0.041637264 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:58:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 46 KiB/s wr, 200 op/s
Feb  2 12:58:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:25 np0005605476 nova_compute[239846]: 2026-02-02 17:58:25.780 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 46 KiB/s wr, 200 op/s
Feb  2 12:58:28 np0005605476 nova_compute[239846]: 2026-02-02 17:58:28.444 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:28 np0005605476 nova_compute[239846]: 2026-02-02 17:58:28.462 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Triggering sync for uuid 976b3ab3-0b37-4883-8fc0-b74a428132c9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb  2 12:58:28 np0005605476 nova_compute[239846]: 2026-02-02 17:58:28.462 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:28 np0005605476 nova_compute[239846]: 2026-02-02 17:58:28.462 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:28 np0005605476 nova_compute[239846]: 2026-02-02 17:58:28.501 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:29 np0005605476 podman[268419]: 2026-02-02 17:58:29.619794598 +0000 UTC m=+0.063175190 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:58:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 180 op/s
Feb  2 12:58:30 np0005605476 nova_compute[239846]: 2026-02-02 17:58:30.309 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Feb  2 12:58:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Feb  2 12:58:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Feb  2 12:58:30 np0005605476 nova_compute[239846]: 2026-02-02 17:58:30.782 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:31 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:31Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:c6:3b 10.100.0.7
Feb  2 12:58:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 182 op/s
Feb  2 12:58:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Feb  2 12:58:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Feb  2 12:58:32 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Feb  2 12:58:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 435 KiB/s rd, 9.4 KiB/s wr, 57 op/s
Feb  2 12:58:35 np0005605476 nova_compute[239846]: 2026-02-02 17:58:35.310 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 385 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 976 KiB/s rd, 14 MiB/s wr, 170 op/s
Feb  2 12:58:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:35 np0005605476 nova_compute[239846]: 2026-02-02 17:58:35.818 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Feb  2 12:58:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Feb  2 12:58:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Feb  2 12:58:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:58:36
Feb  2 12:58:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:58:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:58:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'images']
Feb  2 12:58:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.802 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.803 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.819 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.888 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.888 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.897 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:58:36 np0005605476 nova_compute[239846]: 2026-02-02 17:58:36.897 239853 INFO nova.compute.claims [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.006 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.260 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:58:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672578589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.607 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.613 239853 DEBUG nova.compute.provider_tree [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 385 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 16 MiB/s wr, 128 op/s
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.639 239853 DEBUG nova.scheduler.client.report [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.667 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.668 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.721 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.722 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.746 239853 INFO nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:58:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.766 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.840 239853 INFO nova.virt.block_device [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Booting with volume f31789b2-6519-4b8f-a054-f331ed834946 at /dev/vda#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.920 239853 DEBUG nova.policy [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c00d8fbb7f314affbdd560b88d4ce236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.982 239853 DEBUG os_brick.utils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.983 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.990 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.991 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5caf976d-434a-4c52-98fa-652fbe2465de]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.992 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.997 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.997 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[30533aed-9fc1-4053-84d2-39ef8f93bc4f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:37 np0005605476 nova_compute[239846]: 2026-02-02 17:58:37.998 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.003 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.003 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0640e10d-d239-458b-b615-3fadecff5c47]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.005 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[65e0f99c-40a2-4c45-8573-5371055a9fd0]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.005 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.021 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.023 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.023 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.024 239853 DEBUG os_brick.initiator.connectors.lightos [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.024 239853 DEBUG os_brick.utils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.025 239853 DEBUG nova.virt.block_device [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updating existing volume attachment record: 6cbaac5b-8e6d-448c-937c-3e7932f6e10d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.058 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.058 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.059 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.059 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.059 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.061 239853 INFO nova.compute.manager [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Terminating instance#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.062 239853 DEBUG nova.compute.manager [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:58:38 np0005605476 kernel: tap07e92c78-e0 (unregistering): left promiscuous mode
Feb  2 12:58:38 np0005605476 NetworkManager[49022]: <info>  [1770055118.1171] device (tap07e92c78-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:58:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:38Z|00233|binding|INFO|Releasing lport 07e92c78-e0a9-467a-bd04-99569e66ddf8 from this chassis (sb_readonly=0)
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.151 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:38Z|00234|binding|INFO|Setting lport 07e92c78-e0a9-467a-bd04-99569e66ddf8 down in Southbound
Feb  2 12:58:38 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:38Z|00235|binding|INFO|Removing iface tap07e92c78-e0 ovn-installed in OVS
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.155 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.159 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.163 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:c6:3b 10.100.0.7'], port_security=['fa:16:3e:86:c6:3b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '976b3ab3-0b37-4883-8fc0-b74a428132c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed4d424-2957-4e57-bfeb-8d8148412d60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=07e92c78-e0a9-467a-bd04-99569e66ddf8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.165 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 07e92c78-e0a9-467a-bd04-99569e66ddf8 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 unbound from our chassis#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.167 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82a7f311-fed2-4a09-8203-270dceb25c76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.168 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd71a8b-8564-4960-b0e4-8907445d8749]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.170 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace which is not needed anymore#033[00m
Feb  2 12:58:38 np0005605476 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb  2 12:58:38 np0005605476 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 13.782s CPU time.
Feb  2 12:58:38 np0005605476 systemd-machined[208080]: Machine qemu-24-instance-00000018 terminated.
Feb  2 12:58:38 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [NOTICE]   (268381) : haproxy version is 2.8.14-c23fe91
Feb  2 12:58:38 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [NOTICE]   (268381) : path to executable is /usr/sbin/haproxy
Feb  2 12:58:38 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [WARNING]  (268381) : Exiting Master process...
Feb  2 12:58:38 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [ALERT]    (268381) : Current worker (268383) exited with code 143 (Terminated)
Feb  2 12:58:38 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[268375]: [WARNING]  (268381) : All workers exited. Exiting... (0)
Feb  2 12:58:38 np0005605476 systemd[1]: libpod-354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712.scope: Deactivated successfully.
Feb  2 12:58:38 np0005605476 podman[268497]: 2026-02-02 17:58:38.267435404 +0000 UTC m=+0.037331282 container died 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.276 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.279 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.290 239853 INFO nova.virt.libvirt.driver [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Instance destroyed successfully.#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.291 239853 DEBUG nova.objects.instance [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'resources' on Instance uuid 976b3ab3-0b37-4883-8fc0-b74a428132c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:58:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712-userdata-shm.mount: Deactivated successfully.
Feb  2 12:58:38 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e6e04ecf8b54b94559645768eea4ed5a5caec9fb44cf0efbbf7cd161541dfb39-merged.mount: Deactivated successfully.
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.309 239853 DEBUG nova.virt.libvirt.vif [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1551732628',display_name='tempest-TransferEncryptedVolumeTest-server-1551732628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1551732628',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL20dj+iLlPBhH3QetkanxJ9arz9zWPbMqxHF1jKWT7VB0QW6ft94fhnX+HrFOgf7uyZxPcpCBhY76SvWEIeIoV2yuERlEGnIqFJm93zg5/GYQuktWiQ/7fXyq3RvecBzA==',key_name='tempest-TransferEncryptedVolumeTest-1523216110',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:58:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-oag46rsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:58:20Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=976b3ab3-0b37-4883-8fc0-b74a428132c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.310 239853 DEBUG nova.network.os_vif_util [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "address": "fa:16:3e:86:c6:3b", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e92c78-e0", "ovs_interfaceid": "07e92c78-e0a9-467a-bd04-99569e66ddf8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.310 239853 DEBUG nova.network.os_vif_util [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:58:38 np0005605476 podman[268497]: 2026-02-02 17:58:38.311230818 +0000 UTC m=+0.081126696 container cleanup 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.311 239853 DEBUG os_vif [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.312 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.313 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07e92c78-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.314 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 systemd[1]: libpod-conmon-354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712.scope: Deactivated successfully.
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.316 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.320 239853 INFO os_vif [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:c6:3b,bridge_name='br-int',has_traffic_filtering=True,id=07e92c78-e0a9-467a-bd04-99569e66ddf8,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e92c78-e0')#033[00m
Feb  2 12:58:38 np0005605476 podman[268535]: 2026-02-02 17:58:38.410305238 +0000 UTC m=+0.081168307 container remove 354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.415 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d5db8865-0862-412a-87d4-a7b89f4a129a]: (4, ('Mon Feb  2 05:58:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712)\n354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712\nMon Feb  2 05:58:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712)\n354c39297dfcd6f80ea06cbef6ef65bd101433376d89ce48c28a5cfb64cd9712\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.416 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[13e4bc06-3431-4814-a0e0-b60a8e6bad93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.417 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:38 np0005605476 kernel: tap82a7f311-f0: left promiscuous mode
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.419 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.425 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.425 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac10959-847d-457c-9e89-809c9739244a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.433 239853 DEBUG nova.compute.manager [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-unplugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.434 239853 DEBUG oslo_concurrency.lockutils [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.434 239853 DEBUG oslo_concurrency.lockutils [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.435 239853 DEBUG oslo_concurrency.lockutils [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.435 239853 DEBUG nova.compute.manager [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] No waiting events found dispatching network-vif-unplugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.435 239853 DEBUG nova.compute.manager [req-d03400cb-2cf0-40bc-b61d-6ea4639ee2c3 req-1e9b4aa7-287a-48ab-8154-45703a41a7a6 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-unplugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.436 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[25e35262-0ae5-4f21-ab71-74089bea9d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.438 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[daafcd49-b9b3-449c-a91b-e6fd5c3b9c1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.450 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a8a998-cb8e-4664-9632-cd83fedde220]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433827, 'reachable_time': 31189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268565, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 systemd[1]: run-netns-ovnmeta\x2d82a7f311\x2dfed2\x2d4a09\x2d8203\x2d270dceb25c76.mount: Deactivated successfully.
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.455 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:58:38 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:38.455 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[89fa374e-b491-4b52-ada4-e47be0234b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.576 239853 INFO nova.virt.libvirt.driver [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Deleting instance files /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9_del#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.577 239853 INFO nova.virt.libvirt.driver [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Deletion of /var/lib/nova/instances/976b3ab3-0b37-4883-8fc0-b74a428132c9_del complete#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.647 239853 INFO nova.compute.manager [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.648 239853 DEBUG oslo.service.loopingcall [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.649 239853 DEBUG nova.compute.manager [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.649 239853 DEBUG nova.network.neutron [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:58:38 np0005605476 nova_compute[239846]: 2026-02-02 17:58:38.747 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Successfully created port: cbf4b62b-3e45-41de-b963-93333041132a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:58:38 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:38 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2907759341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.182 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.183 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.184 239853 INFO nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Creating image(s)#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.184 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.184 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Ensure instance console log exists: /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.185 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.185 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.185 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.550 239853 DEBUG nova.network.neutron [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.588 239853 INFO nova.compute.manager [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.604 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Successfully updated port: cbf4b62b-3e45-41de-b963-93333041132a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.617 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.618 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquired lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.618 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:58:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 385 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 14 MiB/s wr, 156 op/s
Feb  2 12:58:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Feb  2 12:58:39 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Feb  2 12:58:39 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.741 239853 DEBUG nova.compute.manager [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-changed-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.742 239853 DEBUG nova.compute.manager [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Refreshing instance network info cache due to event network-changed-cbf4b62b-3e45-41de-b963-93333041132a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:58:39 np0005605476 nova_compute[239846]: 2026-02-02 17:58:39.742 239853 DEBUG oslo_concurrency.lockutils [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.014 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.023 239853 INFO nova.compute.manager [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Took 0.44 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.071 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.072 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.131 239853 DEBUG oslo_concurrency.processutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.312 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261648485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261648485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.514 239853 DEBUG nova.compute.manager [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.514 239853 DEBUG oslo_concurrency.lockutils [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.515 239853 DEBUG oslo_concurrency.lockutils [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.515 239853 DEBUG oslo_concurrency.lockutils [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.515 239853 DEBUG nova.compute.manager [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] No waiting events found dispatching network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.515 239853 WARNING nova.compute.manager [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received unexpected event network-vif-plugged-07e92c78-e0a9-467a-bd04-99569e66ddf8 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.515 239853 DEBUG nova.compute.manager [req-c4ff4c6a-26b6-488b-8be1-070e4ca3cd9d req-ff2fa53e-4c82-40f2-a381-c73a4eab8a42 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Received event network-vif-deleted-07e92c78-e0a9-467a-bd04-99569e66ddf8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295899641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.670 239853 DEBUG oslo_concurrency.processutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.675 239853 DEBUG nova.compute.provider_tree [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.688 239853 DEBUG nova.scheduler.client.report [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.718 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.734 239853 DEBUG nova.network.neutron [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updating instance_info_cache with network_info: [{"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.746 239853 INFO nova.scheduler.client.report [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Deleted allocations for instance 976b3ab3-0b37-4883-8fc0-b74a428132c9#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.761 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Releasing lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.762 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Instance network_info: |[{"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.763 239853 DEBUG oslo_concurrency.lockutils [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.763 239853 DEBUG nova.network.neutron [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Refreshing network info cache for port cbf4b62b-3e45-41de-b963-93333041132a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.767 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Start _get_guest_xml network_info=[{"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '6cbaac5b-8e6d-448c-937c-3e7932f6e10d', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f31789b2-6519-4b8f-a054-f331ed834946', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f31789b2-6519-4b8f-a054-f331ed834946', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c88ba03a-1274-4c23-9615-70cad271dad9', 'attached_at': '', 'detached_at': '', 'volume_id': 'f31789b2-6519-4b8f-a054-f331ed834946', 'serial': 'f31789b2-6519-4b8f-a054-f331ed834946'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.774 239853 WARNING nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.778 239853 DEBUG nova.virt.libvirt.host [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.779 239853 DEBUG nova.virt.libvirt.host [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.781 239853 DEBUG nova.virt.libvirt.host [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.781 239853 DEBUG nova.virt.libvirt.host [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.782 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.782 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.782 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.782 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.783 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.784 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.784 239853 DEBUG nova.virt.hardware [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.803 239853 DEBUG nova.storage.rbd_utils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image c88ba03a-1274-4c23-9615-70cad271dad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.807 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:40 np0005605476 nova_compute[239846]: 2026-02-02 17:58:40.827 239853 DEBUG oslo_concurrency.lockutils [None req-e53c3609-25bd-4e29-8ea6-dcbbb5fc4f82 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "976b3ab3-0b37-4883-8fc0-b74a428132c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:41 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84913382' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.306 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.440 239853 DEBUG os_brick.encryptors [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Using volume encryption metadata '{'encryption_key_id': '6861d8af-6c57-4c2f-aeab-2af19f9331d0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f31789b2-6519-4b8f-a054-f331ed834946', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f31789b2-6519-4b8f-a054-f331ed834946', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c88ba03a-1274-4c23-9615-70cad271dad9', 'attached_at': '', 'detached_at': '', 'volume_id': 'f31789b2-6519-4b8f-a054-f331ed834946', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.441 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.456 239853 DEBUG barbicanclient.v1.secrets [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.456 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.476 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.476 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.498 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.499 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.519 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.520 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.545 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.545 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.570 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.571 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.594 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.594 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.624 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.625 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 385 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 893 KiB/s rd, 14 MiB/s wr, 172 op/s
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.661 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.662 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.679 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.680 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.703 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.703 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Feb  2 12:58:41 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Feb  2 12:58:41 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.732 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.733 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.766 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.767 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.785 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.785 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.812 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.813 239853 INFO barbicanclient.base [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/6861d8af-6c57-4c2f-aeab-2af19f9331d0#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.833 239853 DEBUG barbicanclient.client [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.834 239853 DEBUG nova.virt.libvirt.host [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <volume>f31789b2-6519-4b8f-a054-f331ed834946</volume>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:58:41 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:58:41 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.861 239853 DEBUG nova.virt.libvirt.vif [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:58:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1258712006',display_name='tempest-TestEncryptedCinderVolumes-server-1258712006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1258712006',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-kv4n0fnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:58:37Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=c88ba03a-1274-4c23-9615-70cad271dad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.862 239853 DEBUG nova.network.os_vif_util [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.862 239853 DEBUG nova.network.os_vif_util [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.863 239853 DEBUG nova.objects.instance [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'pci_devices' on Instance uuid c88ba03a-1274-4c23-9615-70cad271dad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.877 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <uuid>c88ba03a-1274-4c23-9615-70cad271dad9</uuid>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <name>instance-00000019</name>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1258712006</nova:name>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:58:40</nova:creationTime>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:user uuid="c00d8fbb7f314affbdd560b88d4ce236">tempest-TestEncryptedCinderVolumes-1563506128-project-member</nova:user>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:project uuid="f1ccd20d4c994d098fc29da09fe94797">tempest-TestEncryptedCinderVolumes-1563506128</nova:project>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <nova:port uuid="cbf4b62b-3e45-41de-b963-93333041132a">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="serial">c88ba03a-1274-4c23-9615-70cad271dad9</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="uuid">c88ba03a-1274-4c23-9615-70cad271dad9</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/c88ba03a-1274-4c23-9615-70cad271dad9_disk.config">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-f31789b2-6519-4b8f-a054-f331ed834946">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <serial>f31789b2-6519-4b8f-a054-f331ed834946</serial>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="71ffe41a-dac8-4c01-b478-a65a469c0547"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:9b:07:7d"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <target dev="tapcbf4b62b-3e"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/console.log" append="off"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:58:41 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:58:41 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:58:41 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:58:41 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.878 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Preparing to wait for external event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.878 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.878 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.878 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.879 239853 DEBUG nova.virt.libvirt.vif [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:58:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1258712006',display_name='tempest-TestEncryptedCinderVolumes-server-1258712006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1258712006',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-kv4n0fnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:58:37Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=c88ba03a-1274-4c23-9615-70cad271dad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.879 239853 DEBUG nova.network.os_vif_util [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.880 239853 DEBUG nova.network.os_vif_util [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.880 239853 DEBUG os_vif [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.881 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.881 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.881 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.883 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.883 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf4b62b-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.884 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbf4b62b-3e, col_values=(('external_ids', {'iface-id': 'cbf4b62b-3e45-41de-b963-93333041132a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:07:7d', 'vm-uuid': 'c88ba03a-1274-4c23-9615-70cad271dad9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.885 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:41 np0005605476 NetworkManager[49022]: <info>  [1770055121.8861] manager: (tapcbf4b62b-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.887 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.890 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.890 239853 INFO os_vif [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e')#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.933 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.934 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.934 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No VIF found with MAC fa:16:3e:9b:07:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.934 239853 INFO nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Using config drive#033[00m
Feb  2 12:58:41 np0005605476 nova_compute[239846]: 2026-02-02 17:58:41.950 239853 DEBUG nova.storage.rbd_utils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image c88ba03a-1274-4c23-9615-70cad271dad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.134 239853 DEBUG nova.network.neutron [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updated VIF entry in instance network info cache for port cbf4b62b-3e45-41de-b963-93333041132a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.134 239853 DEBUG nova.network.neutron [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updating instance_info_cache with network_info: [{"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.158 239853 DEBUG oslo_concurrency.lockutils [req-6729b514-2393-47e7-9cc2-7979fe721a4a req-f857de4e-9c22-472c-9706-a3b879230d5f e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.238 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.265 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.266 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.267 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.267 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.267 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.313 239853 INFO nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Creating config drive at /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.317 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxd2zphm9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.436 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxd2zphm9" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.462 239853 DEBUG nova.storage.rbd_utils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image c88ba03a-1274-4c23-9615-70cad271dad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.466 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config c88ba03a-1274-4c23-9615-70cad271dad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.578 239853 DEBUG oslo_concurrency.processutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config c88ba03a-1274-4c23-9615-70cad271dad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.579 239853 INFO nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Deleting local config drive /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9/disk.config because it was imported into RBD.#033[00m
Feb  2 12:58:42 np0005605476 kernel: tapcbf4b62b-3e: entered promiscuous mode
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.6432] manager: (tapcbf4b62b-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Feb  2 12:58:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:42Z|00236|binding|INFO|Claiming lport cbf4b62b-3e45-41de-b963-93333041132a for this chassis.
Feb  2 12:58:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:42Z|00237|binding|INFO|cbf4b62b-3e45-41de-b963-93333041132a: Claiming fa:16:3e:9b:07:7d 10.100.0.6
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.649 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.658 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:07:7d 10.100.0.6'], port_security=['fa:16:3e:9b:07:7d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c88ba03a-1274-4c23-9615-70cad271dad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5c671fc8-95a7-4695-88ca-6053121c3610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=cbf4b62b-3e45-41de-b963-93333041132a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.659 155391 INFO neutron.agent.ovn.metadata.agent [-] Port cbf4b62b-3e45-41de-b963-93333041132a in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec bound to our chassis#033[00m
Feb  2 12:58:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:42Z|00238|binding|INFO|Setting lport cbf4b62b-3e45-41de-b963-93333041132a ovn-installed in OVS
Feb  2 12:58:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:42Z|00239|binding|INFO|Setting lport cbf4b62b-3e45-41de-b963-93333041132a up in Southbound
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.665 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.665 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bad2c851-1c12-4a83-9873-6096fe5f4eec#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.667 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 systemd-udevd[268721]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:58:42 np0005605476 systemd-machined[208080]: New machine qemu-25-instance-00000019.
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.6872] device (tapcbf4b62b-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.6878] device (tapcbf4b62b-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.687 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d67f08d-2170-4cfe-aa3a-e78e559f491d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.688 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbad2c851-11 in ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.690 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbad2c851-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.690 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[49cb82a6-a1e9-42e6-94f0-b25980274a2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.692 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a105002-bfde-4161-b1e4-81cc7099dccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.703 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[f78a0edc-af4e-4b29-9e08-b70f36ba0395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.726 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[99fa740f-4510-40b1-865c-36dedaa5d1f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.747 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b524c3a2-9147-427e-b38e-0463528389cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.7560] manager: (tapbad2c851-10): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.754 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4e371f05-35ed-4973-b33f-f37c74b4fcef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.782 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[455645bd-7c92-4980-8c32-0e270fe8a201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.785 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc8041-9f79-4fd3-b8a2-f25afdba2390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.8047] device (tapbad2c851-10): carrier: link connected
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.808 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[71a8e946-e8c9-4777-b933-0bb8b4de8b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.823 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5a02dc94-7d80-47a1-8b74-39b28a055fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436413, 'reachable_time': 36250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268757, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116497798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.835 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[69901eee-37dc-417b-9e09-3a3a67ce5026]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:54c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436413, 'tstamp': 436413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268758, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.848 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.850 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f8078d-f26c-4e60-9b5d-665e1a1b1800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436413, 'reachable_time': 36250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268761, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.873 239853 DEBUG nova.compute.manager [req-ce3390b8-6ae0-4816-9624-78bfc5cc6c95 req-94cd0440-c244-4c67-921d-a8aa4846c3e9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.874 239853 DEBUG oslo_concurrency.lockutils [req-ce3390b8-6ae0-4816-9624-78bfc5cc6c95 req-94cd0440-c244-4c67-921d-a8aa4846c3e9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.874 239853 DEBUG oslo_concurrency.lockutils [req-ce3390b8-6ae0-4816-9624-78bfc5cc6c95 req-94cd0440-c244-4c67-921d-a8aa4846c3e9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.874 239853 DEBUG oslo_concurrency.lockutils [req-ce3390b8-6ae0-4816-9624-78bfc5cc6c95 req-94cd0440-c244-4c67-921d-a8aa4846c3e9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.874 239853 DEBUG nova.compute.manager [req-ce3390b8-6ae0-4816-9624-78bfc5cc6c95 req-94cd0440-c244-4c67-921d-a8aa4846c3e9 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Processing event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.880 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[201b4b7f-9d03-445d-b190-b0bf5fafca4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.912 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.912 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.930 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ca68fbf8-d725-41f5-ad2f-e681b2287f14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.932 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.932 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.933 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbad2c851-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.934 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 NetworkManager[49022]: <info>  [1770055122.9354] manager: (tapbad2c851-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Feb  2 12:58:42 np0005605476 kernel: tapbad2c851-10: entered promiscuous mode
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.940 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbad2c851-10, col_values=(('external_ids', {'iface-id': 'ad9a646b-a8d9-417d-9b26-cd7734bca07f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:42Z|00240|binding|INFO|Releasing lport ad9a646b-a8d9-417d-9b26-cd7734bca07f from this chassis (sb_readonly=0)
Feb  2 12:58:42 np0005605476 nova_compute[239846]: 2026-02-02 17:58:42.948 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.950 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.951 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[277cb1cf-d7e1-4326-aec5-f425d41cad27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.951 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:58:42 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:42.953 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'env', 'PROCESS_TAG=haproxy-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bad2c851-1c12-4a83-9873-6096fe5f4eec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.050 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.052 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4263MB free_disk=59.987781358882785GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.052 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.052 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.113 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance c88ba03a-1274-4c23-9615-70cad271dad9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.114 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.114 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.151 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2820777171' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2820777171' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:43 np0005605476 podman[268830]: 2026-02-02 17:58:43.316863728 +0000 UTC m=+0.104337330 container create 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:58:43 np0005605476 podman[268830]: 2026-02-02 17:58:43.231602966 +0000 UTC m=+0.019076578 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:58:43 np0005605476 systemd[1]: Started libpod-conmon-50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965.scope.
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1845436224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1845436224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:43 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:58:43 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5d591535c194c1326855fa06e00275318a72f17750cfdefc03b94cc68d69f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:58:43 np0005605476 podman[268830]: 2026-02-02 17:58:43.428850682 +0000 UTC m=+0.216324304 container init 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:58:43 np0005605476 podman[268830]: 2026-02-02 17:58:43.434435909 +0000 UTC m=+0.221909501 container start 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true)
Feb  2 12:58:43 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [NOTICE]   (268868) : New worker (268870) forked
Feb  2 12:58:43 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [NOTICE]   (268868) : Loading success.
Feb  2 12:58:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 385 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 26 KiB/s wr, 65 op/s
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:58:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534575441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.684 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.689 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.704 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.727 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:58:43 np0005605476 nova_compute[239846]: 2026-02-02 17:58:43.727 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:44 np0005605476 nova_compute[239846]: 2026-02-02 17:58:44.728 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:44 np0005605476 nova_compute[239846]: 2026-02-02 17:58:44.729 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:58:44 np0005605476 nova_compute[239846]: 2026-02-02 17:58:44.802 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 12:58:44 np0005605476 nova_compute[239846]: 2026-02-02 17:58:44.803 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:44 np0005605476 nova_compute[239846]: 2026-02-02 17:58:44.803 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.009 239853 DEBUG nova.compute.manager [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.009 239853 DEBUG oslo_concurrency.lockutils [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.010 239853 DEBUG oslo_concurrency.lockutils [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.010 239853 DEBUG oslo_concurrency.lockutils [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.010 239853 DEBUG nova.compute.manager [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] No waiting events found dispatching network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.010 239853 WARNING nova.compute.manager [req-7c00f422-44f2-48b7-89b1-4d4236d575fe req-5b811575-bfcc-4395-838b-b44125c9fb77 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received unexpected event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.132 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055125.1315746, c88ba03a-1274-4c23-9615-70cad271dad9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.133 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] VM Started (Lifecycle Event)#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.135 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.138 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.140 239853 INFO nova.virt.libvirt.driver [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Instance spawned successfully.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.141 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.152 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.157 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.160 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.161 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.161 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.161 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.162 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.162 239853 DEBUG nova.virt.libvirt.driver [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.185 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.185 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055125.1316996, c88ba03a-1274-4c23-9615-70cad271dad9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.186 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.218 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.220 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055125.137618, c88ba03a-1274-4c23-9615-70cad271dad9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.221 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.234 239853 INFO nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Took 6.05 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.235 239853 DEBUG nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.245 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.247 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.273 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.295 239853 INFO nova.compute.manager [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Took 8.44 seconds to build instance.#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.308 239853 DEBUG oslo_concurrency.lockutils [None req-2530229c-10ac-430b-b2aa-cb1c9fc04726 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:45 np0005605476 nova_compute[239846]: 2026-02-02 17:58:45.315 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 202 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 44 KiB/s wr, 167 op/s
Feb  2 12:58:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:46.651 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:58:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:46.652 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:58:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:58:46.652 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:58:46 np0005605476 nova_compute[239846]: 2026-02-02 17:58:46.886 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.077396745303572e-05 of space, bias 1.0, pg target 0.0032321902359107157 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002197910117444764 of space, bias 1.0, pg target 0.6593730352334292 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.0556907258850683e-06 of space, bias 1.0, pg target 0.0006167072177655205 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665065045265774 of space, bias 1.0, pg target 0.19995195135797322 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.722423062874422e-07 of space, bias 4.0, pg target 0.0011666907675449308 quantized to 16 (current 16)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 12:58:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 202 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 21 KiB/s wr, 122 op/s
Feb  2 12:58:48 np0005605476 nova_compute[239846]: 2026-02-02 17:58:48.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:49 np0005605476 nova_compute[239846]: 2026-02-02 17:58:49.257 239853 DEBUG nova.compute.manager [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-changed-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:58:49 np0005605476 nova_compute[239846]: 2026-02-02 17:58:49.258 239853 DEBUG nova.compute.manager [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Refreshing instance network info cache due to event network-changed-cbf4b62b-3e45-41de-b963-93333041132a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:58:49 np0005605476 nova_compute[239846]: 2026-02-02 17:58:49.258 239853 DEBUG oslo_concurrency.lockutils [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:58:49 np0005605476 nova_compute[239846]: 2026-02-02 17:58:49.258 239853 DEBUG oslo_concurrency.lockutils [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:58:49 np0005605476 nova_compute[239846]: 2026-02-02 17:58:49.259 239853 DEBUG nova.network.neutron [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Refreshing network info cache for port cbf4b62b-3e45-41de-b963-93333041132a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:58:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 202 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 802 KiB/s rd, 18 KiB/s wr, 115 op/s
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.240 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.317 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3780003294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Feb  2 12:58:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.871 239853 DEBUG nova.network.neutron [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updated VIF entry in instance network info cache for port cbf4b62b-3e45-41de-b963-93333041132a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.872 239853 DEBUG nova.network.neutron [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updating instance_info_cache with network_info: [{"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:58:50 np0005605476 nova_compute[239846]: 2026-02-02 17:58:50.891 239853 DEBUG oslo_concurrency.lockutils [req-d4ae2b7e-c4b7-4cb3-a8c0-a52ddb495363 req-f554c2ce-917b-48b1-bc3f-38921cdf624d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-c88ba03a-1274-4c23-9615-70cad271dad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:58:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 202 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 157 op/s
Feb  2 12:58:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Feb  2 12:58:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Feb  2 12:58:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Feb  2 12:58:51 np0005605476 nova_compute[239846]: 2026-02-02 17:58:51.889 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:53 np0005605476 nova_compute[239846]: 2026-02-02 17:58:53.289 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055118.2881255, 976b3ab3-0b37-4883-8fc0-b74a428132c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:58:53 np0005605476 nova_compute[239846]: 2026-02-02 17:58:53.290 239853 INFO nova.compute.manager [-] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:58:53 np0005605476 nova_compute[239846]: 2026-02-02 17:58:53.309 239853 DEBUG nova.compute.manager [None req-4c6e1956-3ec7-4b4c-9149-0df4a3aae606 - - - - - -] [instance: 976b3ab3-0b37-4883-8fc0-b74a428132c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:58:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 202 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 767 B/s wr, 88 op/s
Feb  2 12:58:55 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553272643' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:58:55 np0005605476 nova_compute[239846]: 2026-02-02 17:58:55.318 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 202 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Feb  2 12:58:55 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Feb  2 12:58:56 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 12:58:56 np0005605476 podman[268889]: 2026-02-02 17:58:56.600655146 +0000 UTC m=+0.046240193 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:58:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Feb  2 12:58:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Feb  2 12:58:56 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Feb  2 12:58:56 np0005605476 nova_compute[239846]: 2026-02-02 17:58:56.891 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:58:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:57Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:07:7d 10.100.0.6
Feb  2 12:58:57 np0005605476 ovn_controller[146041]: 2026-02-02T17:58:57Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:07:7d 10.100.0.6
Feb  2 12:58:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 202 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 94 op/s
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3381587548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:58:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3381587548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:58:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 250 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 9.1 MiB/s wr, 239 op/s
Feb  2 12:59:00 np0005605476 nova_compute[239846]: 2026-02-02 17:59:00.320 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:00 np0005605476 podman[268908]: 2026-02-02 17:59:00.611379176 +0000 UTC m=+0.061821763 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 12:59:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 271 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 8.8 MiB/s wr, 182 op/s
Feb  2 12:59:01 np0005605476 nova_compute[239846]: 2026-02-02 17:59:01.893 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:01 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:01 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2872773621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:02 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.04527589 +0000 UTC m=+0.048677231 container create 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:59:03 np0005605476 systemd[1]: Started libpod-conmon-08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b.scope.
Feb  2 12:59:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.027976863 +0000 UTC m=+0.031378224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.133939088 +0000 UTC m=+0.137340439 container init 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.14181172 +0000 UTC m=+0.145213061 container start 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.14538784 +0000 UTC m=+0.148789221 container attach 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 12:59:03 np0005605476 recursing_nash[269093]: 167 167
Feb  2 12:59:03 np0005605476 systemd[1]: libpod-08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b.scope: Deactivated successfully.
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.148695043 +0000 UTC m=+0.152096394 container died 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 12:59:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-4a11c5c162456a9f08543b0392b9a52926a8a05c131d35e9e0622142255645c3-merged.mount: Deactivated successfully.
Feb  2 12:59:03 np0005605476 podman[269077]: 2026-02-02 17:59:03.197504578 +0000 UTC m=+0.200905919 container remove 08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 12:59:03 np0005605476 systemd[1]: libpod-conmon-08be499eec3e51ce5b99808a233b68d0ab4a38a745441c1da7be4efd5c10602b.scope: Deactivated successfully.
Feb  2 12:59:03 np0005605476 podman[269118]: 2026-02-02 17:59:03.3406266 +0000 UTC m=+0.035638915 container create 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 12:59:03 np0005605476 systemd[1]: Started libpod-conmon-07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f.scope.
Feb  2 12:59:03 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:03 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:03 np0005605476 podman[269118]: 2026-02-02 17:59:03.32288423 +0000 UTC m=+0.017896545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:03 np0005605476 podman[269118]: 2026-02-02 17:59:03.431733256 +0000 UTC m=+0.126745581 container init 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 12:59:03 np0005605476 podman[269118]: 2026-02-02 17:59:03.442067817 +0000 UTC m=+0.137080122 container start 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 12:59:03 np0005605476 podman[269118]: 2026-02-02 17:59:03.446044939 +0000 UTC m=+0.141057234 container attach 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:59:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 271 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 7.6 MiB/s wr, 157 op/s
Feb  2 12:59:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Feb  2 12:59:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Feb  2 12:59:03 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Feb  2 12:59:03 np0005605476 wizardly_lewin[269135]: --> passed data devices: 0 physical, 3 LVM
Feb  2 12:59:03 np0005605476 wizardly_lewin[269135]: --> All data devices are unavailable
Feb  2 12:59:03 np0005605476 systemd[1]: libpod-07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f.scope: Deactivated successfully.
Feb  2 12:59:03 np0005605476 podman[269155]: 2026-02-02 17:59:03.912110936 +0000 UTC m=+0.027302790 container died 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-29bdcf7f77912fa615e064e79345f61a6f39e8c8633a26027ce7869e727f741d-merged.mount: Deactivated successfully.
Feb  2 12:59:03 np0005605476 podman[269155]: 2026-02-02 17:59:03.949715826 +0000 UTC m=+0.064907620 container remove 07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 12:59:03 np0005605476 systemd[1]: libpod-conmon-07fc4799056d43880cf4128fafefd531aabbdabc30fe1ecb7e28eeb1240bf36f.scope: Deactivated successfully.
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.330074819 +0000 UTC m=+0.037752904 container create 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:59:04 np0005605476 systemd[1]: Started libpod-conmon-6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140.scope.
Feb  2 12:59:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.397528579 +0000 UTC m=+0.105206654 container init 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.403009673 +0000 UTC m=+0.110687748 container start 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 12:59:04 np0005605476 wonderful_engelbart[269249]: 167 167
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.406225454 +0000 UTC m=+0.113903549 container attach 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:04 np0005605476 systemd[1]: libpod-6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140.scope: Deactivated successfully.
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.406772089 +0000 UTC m=+0.114450164 container died 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.314669675 +0000 UTC m=+0.022347770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:04 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e46f0b305c8914a816476af2219ca0a98a25918c125512dd90f78e74d07c14da-merged.mount: Deactivated successfully.
Feb  2 12:59:04 np0005605476 podman[269233]: 2026-02-02 17:59:04.441084336 +0000 UTC m=+0.148762411 container remove 6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:59:04 np0005605476 systemd[1]: libpod-conmon-6105b50fae9478b8d669feb6196dceadabb11fcdaccc4699c04d54a7de5d4140.scope: Deactivated successfully.
Feb  2 12:59:04 np0005605476 podman[269271]: 2026-02-02 17:59:04.558705329 +0000 UTC m=+0.030116069 container create 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:59:04 np0005605476 systemd[1]: Started libpod-conmon-31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57.scope.
Feb  2 12:59:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f10d1080ddcbe537f3f950bec54221b24379206688c30cb3c3a40c4d3eac00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f10d1080ddcbe537f3f950bec54221b24379206688c30cb3c3a40c4d3eac00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f10d1080ddcbe537f3f950bec54221b24379206688c30cb3c3a40c4d3eac00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f10d1080ddcbe537f3f950bec54221b24379206688c30cb3c3a40c4d3eac00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:04 np0005605476 podman[269271]: 2026-02-02 17:59:04.545685842 +0000 UTC m=+0.017096602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:04 np0005605476 podman[269271]: 2026-02-02 17:59:04.649760534 +0000 UTC m=+0.121171284 container init 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 12:59:04 np0005605476 podman[269271]: 2026-02-02 17:59:04.65602903 +0000 UTC m=+0.127439770 container start 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:59:04 np0005605476 podman[269271]: 2026-02-02 17:59:04.65958601 +0000 UTC m=+0.130996770 container attach 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.810 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.811 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.812 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.812 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.812 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.813 239853 INFO nova.compute.manager [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Terminating instance#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.814 239853 DEBUG nova.compute.manager [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:59:04 np0005605476 kernel: tapcbf4b62b-3e (unregistering): left promiscuous mode
Feb  2 12:59:04 np0005605476 NetworkManager[49022]: <info>  [1770055144.8581] device (tapcbf4b62b-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.865 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:04Z|00241|binding|INFO|Releasing lport cbf4b62b-3e45-41de-b963-93333041132a from this chassis (sb_readonly=0)
Feb  2 12:59:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:04Z|00242|binding|INFO|Setting lport cbf4b62b-3e45-41de-b963-93333041132a down in Southbound
Feb  2 12:59:04 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:04Z|00243|binding|INFO|Removing iface tapcbf4b62b-3e ovn-installed in OVS
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.868 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:04 np0005605476 nova_compute[239846]: 2026-02-02 17:59:04.878 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:04.879 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:07:7d 10.100.0.6'], port_security=['fa:16:3e:9b:07:7d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c88ba03a-1274-4c23-9615-70cad271dad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5c671fc8-95a7-4695-88ca-6053121c3610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=cbf4b62b-3e45-41de-b963-93333041132a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:59:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:04.881 155391 INFO neutron.agent.ovn.metadata.agent [-] Port cbf4b62b-3e45-41de-b963-93333041132a in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec unbound from our chassis#033[00m
Feb  2 12:59:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:04.883 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bad2c851-1c12-4a83-9873-6096fe5f4eec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:59:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:04.884 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[fe676c1e-c007-48fd-8e6d-4171e4375428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:04 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:04.884 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace which is not needed anymore#033[00m
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]: {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    "0": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "devices": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "/dev/loop3"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            ],
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_name": "ceph_lv0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_size": "21470642176",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "name": "ceph_lv0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "tags": {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_name": "ceph",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.crush_device_class": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.encrypted": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.objectstore": "bluestore",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_id": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.vdo": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.with_tpm": "0"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            },
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "vg_name": "ceph_vg0"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        }
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    ],
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    "1": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "devices": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "/dev/loop4"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            ],
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_name": "ceph_lv1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_size": "21470642176",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "name": "ceph_lv1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "tags": {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_name": "ceph",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.crush_device_class": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.encrypted": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.objectstore": "bluestore",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_id": "1",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.vdo": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.with_tpm": "0"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            },
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "vg_name": "ceph_vg1"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        }
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    ],
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    "2": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "devices": [
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "/dev/loop5"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            ],
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_name": "ceph_lv2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_size": "21470642176",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "name": "ceph_lv2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "tags": {
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cephx_lockbox_secret": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.cluster_name": "ceph",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.crush_device_class": "",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.encrypted": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.objectstore": "bluestore",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osd_id": "2",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.vdo": "0",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:                "ceph.with_tpm": "0"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            },
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "type": "block",
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:            "vg_name": "ceph_vg2"
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:        }
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]:    ]
Feb  2 12:59:04 np0005605476 objective_vaughan[269287]: }
Feb  2 12:59:04 np0005605476 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb  2 12:59:04 np0005605476 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 13.600s CPU time.
Feb  2 12:59:04 np0005605476 systemd-machined[208080]: Machine qemu-25-instance-00000019 terminated.
Feb  2 12:59:04 np0005605476 systemd[1]: libpod-31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57.scope: Deactivated successfully.
Feb  2 12:59:04 np0005605476 conmon[269287]: conmon 31d84817d36c24aefdf2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57.scope/container/memory.events
Feb  2 12:59:04 np0005605476 podman[269321]: 2026-02-02 17:59:04.971348502 +0000 UTC m=+0.020577051 container died 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 12:59:04 np0005605476 systemd[1]: var-lib-containers-storage-overlay-56f10d1080ddcbe537f3f950bec54221b24379206688c30cb3c3a40c4d3eac00-merged.mount: Deactivated successfully.
Feb  2 12:59:05 np0005605476 podman[269321]: 2026-02-02 17:59:05.004782823 +0000 UTC m=+0.054011342 container remove 31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728151994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:05 np0005605476 systemd[1]: libpod-conmon-31d84817d36c24aefdf22af1277021c74bd751cc8f1cfe2bb46d24520ee34a57.scope: Deactivated successfully.
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728151994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:05 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [NOTICE]   (268868) : haproxy version is 2.8.14-c23fe91
Feb  2 12:59:05 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [NOTICE]   (268868) : path to executable is /usr/sbin/haproxy
Feb  2 12:59:05 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [WARNING]  (268868) : Exiting Master process...
Feb  2 12:59:05 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [ALERT]    (268868) : Current worker (268870) exited with code 143 (Terminated)
Feb  2 12:59:05 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[268864]: [WARNING]  (268868) : All workers exited. Exiting... (0)
Feb  2 12:59:05 np0005605476 systemd[1]: libpod-50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965.scope: Deactivated successfully.
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.031 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 podman[269325]: 2026-02-02 17:59:05.034608843 +0000 UTC m=+0.076863616 container died 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.035 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.045 239853 INFO nova.virt.libvirt.driver [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Instance destroyed successfully.#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.046 239853 DEBUG nova.objects.instance [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'resources' on Instance uuid c88ba03a-1274-4c23-9615-70cad271dad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.058 239853 DEBUG nova.virt.libvirt.vif [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:58:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1258712006',display_name='tempest-TestEncryptedCinderVolumes-server-1258712006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1258712006',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-kv4n0fnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:58:45Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=c88ba03a-1274-4c23-9615-70cad271dad9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:59:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965-userdata-shm.mount: Deactivated successfully.
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.059 239853 DEBUG nova.network.os_vif_util [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "cbf4b62b-3e45-41de-b963-93333041132a", "address": "fa:16:3e:9b:07:7d", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf4b62b-3e", "ovs_interfaceid": "cbf4b62b-3e45-41de-b963-93333041132a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.060 239853 DEBUG nova.network.os_vif_util [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.061 239853 DEBUG os_vif [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.065 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.065 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf4b62b-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.069 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.072 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:59:05 np0005605476 podman[269325]: 2026-02-02 17:59:05.075622289 +0000 UTC m=+0.117877042 container cleanup 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.075 239853 INFO os_vif [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:07:7d,bridge_name='br-int',has_traffic_filtering=True,id=cbf4b62b-3e45-41de-b963-93333041132a,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf4b62b-3e')#033[00m
Feb  2 12:59:05 np0005605476 systemd[1]: libpod-conmon-50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965.scope: Deactivated successfully.
Feb  2 12:59:05 np0005605476 podman[269376]: 2026-02-02 17:59:05.141173405 +0000 UTC m=+0.047646783 container remove 50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.147 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[59561517-8abf-4858-b614-7cd5529a21b2]: (4, ('Mon Feb  2 05:59:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965)\n50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965\nMon Feb  2 05:59:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965)\n50758da755049c3457e153911d315e4f36ce668ee3800c0d250a3cf29fe2d965\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.149 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f137aa6-aa44-4820-9cf8-73e4b9d1ad9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.150 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:05 np0005605476 kernel: tapbad2c851-10: left promiscuous mode
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.152 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.159 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.161 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a4dac8-3190-4327-8c71-f6be8fddc94a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.174 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cf8205-28e6-47e1-aeb2-0d498678821f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.175 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[40b7252a-7d17-4b6a-870f-4594db82e248]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.186 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[df203b57-8812-4cc5-b8fb-8b0f92ff6fcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436407, 'reachable_time': 16919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269457, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.189 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:59:05 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:05.189 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[4e780c81-b4e6-4fd7-a538-bcb3a0679bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.211 239853 INFO nova.virt.libvirt.driver [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Deleting instance files /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9_del#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.212 239853 INFO nova.virt.libvirt.driver [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Deletion of /var/lib/nova/instances/c88ba03a-1274-4c23-9615-70cad271dad9_del complete#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.274 239853 INFO nova.compute.manager [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.275 239853 DEBUG oslo.service.loopingcall [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.275 239853 DEBUG nova.compute.manager [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.276 239853 DEBUG nova.network.neutron [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.322 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd5d591535c194c1326855fa06e00275318a72f17750cfdefc03b94cc68d69f1-merged.mount: Deactivated successfully.
Feb  2 12:59:05 np0005605476 systemd[1]: run-netns-ovnmeta\x2dbad2c851\x2d1c12\x2d4a83\x2d9873\x2d6096fe5f4eec.mount: Deactivated successfully.
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.471312794 +0000 UTC m=+0.076124515 container create e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.411615443 +0000 UTC m=+0.016427184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:05 np0005605476 systemd[1]: Started libpod-conmon-e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad.scope.
Feb  2 12:59:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.548857318 +0000 UTC m=+0.153669119 container init e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.555142985 +0000 UTC m=+0.159954706 container start e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.558368156 +0000 UTC m=+0.163179877 container attach e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:59:05 np0005605476 kind_merkle[269490]: 167 167
Feb  2 12:59:05 np0005605476 systemd[1]: libpod-e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad.scope: Deactivated successfully.
Feb  2 12:59:05 np0005605476 conmon[269490]: conmon e03518132fa29bfc8873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad.scope/container/memory.events
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.562451411 +0000 UTC m=+0.167263172 container died e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 12:59:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d74ae6faec69e98296f65bf3b939bc3d9f4cf9ca3eb4e717d6c0a2460b468cb4-merged.mount: Deactivated successfully.
Feb  2 12:59:05 np0005605476 podman[269473]: 2026-02-02 17:59:05.599041372 +0000 UTC m=+0.203853123 container remove e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 12:59:05 np0005605476 systemd[1]: libpod-conmon-e03518132fa29bfc8873921da601bbf49b9993aec7fc803610f45bbc489dacad.scope: Deactivated successfully.
Feb  2 12:59:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 271 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 7.7 MiB/s wr, 191 op/s
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:05 np0005605476 podman[269512]: 2026-02-02 17:59:05.723195549 +0000 UTC m=+0.039584166 container create b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 12:59:05 np0005605476 systemd[1]: Started libpod-conmon-b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07.scope.
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Feb  2 12:59:05 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Feb  2 12:59:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd4d6c5f34699dcefc29e12afc79cd77e5f31b02cdf13060c204704d953d79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd4d6c5f34699dcefc29e12afc79cd77e5f31b02cdf13060c204704d953d79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd4d6c5f34699dcefc29e12afc79cd77e5f31b02cdf13060c204704d953d79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd4d6c5f34699dcefc29e12afc79cd77e5f31b02cdf13060c204704d953d79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:05 np0005605476 podman[269512]: 2026-02-02 17:59:05.707995201 +0000 UTC m=+0.024383838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 12:59:05 np0005605476 podman[269512]: 2026-02-02 17:59:05.807004329 +0000 UTC m=+0.123392976 container init b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:59:05 np0005605476 podman[269512]: 2026-02-02 17:59:05.813922814 +0000 UTC m=+0.130311441 container start b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 12:59:05 np0005605476 podman[269512]: 2026-02-02 17:59:05.817925917 +0000 UTC m=+0.134314544 container attach b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.836 239853 DEBUG nova.compute.manager [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-unplugged-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.837 239853 DEBUG oslo_concurrency.lockutils [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.837 239853 DEBUG oslo_concurrency.lockutils [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.838 239853 DEBUG oslo_concurrency.lockutils [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.838 239853 DEBUG nova.compute.manager [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] No waiting events found dispatching network-vif-unplugged-cbf4b62b-3e45-41de-b963-93333041132a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:05 np0005605476 nova_compute[239846]: 2026-02-02 17:59:05.838 239853 DEBUG nova.compute.manager [req-b66fcbce-adfb-4cd1-a61f-db3695f97e60 req-895f8385-3455-4d16-b085-3f52ae3d28cc e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-unplugged-cbf4b62b-3e45-41de-b963-93333041132a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390324589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390324589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:06 np0005605476 lvm[269605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 12:59:06 np0005605476 lvm[269605]: VG ceph_vg0 finished
Feb  2 12:59:06 np0005605476 lvm[269607]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 12:59:06 np0005605476 lvm[269607]: VG ceph_vg1 finished
Feb  2 12:59:06 np0005605476 lvm[269608]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 12:59:06 np0005605476 lvm[269608]: VG ceph_vg2 finished
Feb  2 12:59:06 np0005605476 competent_haslett[269529]: {}
Feb  2 12:59:06 np0005605476 systemd[1]: libpod-b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07.scope: Deactivated successfully.
Feb  2 12:59:06 np0005605476 podman[269512]: 2026-02-02 17:59:06.501241074 +0000 UTC m=+0.817629691 container died b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 12:59:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e7fd4d6c5f34699dcefc29e12afc79cd77e5f31b02cdf13060c204704d953d79-merged.mount: Deactivated successfully.
Feb  2 12:59:06 np0005605476 podman[269512]: 2026-02-02 17:59:06.538421451 +0000 UTC m=+0.854810068 container remove b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 12:59:06 np0005605476 systemd[1]: libpod-conmon-b9190dc4a970f61b872c008b5ef3646c09d11ac4663e2808b47b5cec5414cb07.scope: Deactivated successfully.
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:06 np0005605476 nova_compute[239846]: 2026-02-02 17:59:06.701 239853 DEBUG nova.network.neutron [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:06 np0005605476 nova_compute[239846]: 2026-02-02 17:59:06.723 239853 INFO nova.compute.manager [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Took 1.45 seconds to deallocate network for instance.#033[00m
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:06 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 12:59:06 np0005605476 nova_compute[239846]: 2026-02-02 17:59:06.894 239853 INFO nova.compute.manager [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:59:06 np0005605476 nova_compute[239846]: 2026-02-02 17:59:06.947 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:06 np0005605476 nova_compute[239846]: 2026-02-02 17:59:06.948 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.013 239853 DEBUG oslo_concurrency.processutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401190739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.539 239853 DEBUG oslo_concurrency.processutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.544 239853 DEBUG nova.compute.provider_tree [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:59:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 271 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 36 KiB/s wr, 38 op/s
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.720 239853 DEBUG nova.scheduler.client.report [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.747 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.783 239853 INFO nova.scheduler.client.report [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Deleted allocations for instance c88ba03a-1274-4c23-9615-70cad271dad9#033[00m
Feb  2 12:59:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Feb  2 12:59:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Feb  2 12:59:07 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.863 239853 DEBUG oslo_concurrency.lockutils [None req-69289bec-1321-420b-ac17-13910fe74380 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.940 239853 DEBUG nova.compute.manager [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.940 239853 DEBUG oslo_concurrency.lockutils [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.941 239853 DEBUG oslo_concurrency.lockutils [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.941 239853 DEBUG oslo_concurrency.lockutils [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "c88ba03a-1274-4c23-9615-70cad271dad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.941 239853 DEBUG nova.compute.manager [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] No waiting events found dispatching network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.941 239853 WARNING nova.compute.manager [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received unexpected event network-vif-plugged-cbf4b62b-3e45-41de-b963-93333041132a for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:59:07 np0005605476 nova_compute[239846]: 2026-02-02 17:59:07.942 239853 DEBUG nova.compute.manager [req-9d7ccaa3-95a0-48e6-b3e3-a62d3471c557 req-c6404416-903d-41f3-95b7-58a1d2ef7535 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Received event network-vif-deleted-cbf4b62b-3e45-41de-b963-93333041132a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Feb  2 12:59:09 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Feb  2 12:59:09 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Feb  2 12:59:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 271 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 39 KiB/s wr, 122 op/s
Feb  2 12:59:10 np0005605476 nova_compute[239846]: 2026-02-02 17:59:10.069 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Feb  2 12:59:10 np0005605476 nova_compute[239846]: 2026-02-02 17:59:10.323 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:10 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1486238356' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 271 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 11 KiB/s wr, 259 op/s
Feb  2 12:59:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Feb  2 12:59:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Feb  2 12:59:11 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Feb  2 12:59:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 271 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 9.6 KiB/s wr, 227 op/s
Feb  2 12:59:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Feb  2 12:59:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Feb  2 12:59:13 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Feb  2 12:59:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054920934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054920934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.553 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.553 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.571 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.634 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.634 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.640 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.640 239853 INFO nova.compute.claims [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.733 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.924 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.924 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:14 np0005605476 nova_compute[239846]: 2026-02-02 17:59:14.949 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.071 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.074 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956770133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.278 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.283 239853 DEBUG nova.compute.provider_tree [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.303 239853 DEBUG nova.scheduler.client.report [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.324 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.326 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.328 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.330 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.337 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.338 239853 INFO nova.compute.claims [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Claim successful on node compute-0.ctlplane.example.com
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.420 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.421 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.508 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.525 239853 INFO nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.566 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.642 239853 INFO nova.virt.block_device [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Booting with volume 5287c93c-b6cd-44e8-af49-41bb12bcc421 at /dev/vda
Feb  2 12:59:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 20 MiB/s wr, 285 op/s
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e451 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Feb  2 12:59:15 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.817 239853 DEBUG os_brick.utils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.818 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.826 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.827 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[2243c4af-4c93-4635-adfa-a45b0a46774d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.828 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.833 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.833 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e67bd3-3f94-4ec8-9a07-43b75aa527ba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.834 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.839 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.839 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0a635c42-aee4-4e9d-a477-87656c835b28]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.840 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3a0b9e-456f-43e7-b122-27303ccad07a]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.840 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.858 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.860 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.860 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.860 239853 DEBUG os_brick.initiator.connectors.lightos [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.861 239853 DEBUG os_brick.utils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 12:59:15 np0005605476 nova_compute[239846]: 2026-02-02 17:59:15.861 239853 DEBUG nova.virt.block_device [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating existing volume attachment record: b2b4904d-5ebf-48ce-a470-b0b39deb0e80 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 12:59:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237532783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.044 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.050 239853 DEBUG nova.compute.provider_tree [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.073 239853 DEBUG nova.scheduler.client.report [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.116 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.116 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.212 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.212 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.270 239853 INFO nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.318 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.371 239853 INFO nova.virt.block_device [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Booting with volume 08640039-7618-4ae4-95c5-1f173b2afdda at /dev/vda
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.549 239853 DEBUG os_brick.utils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.550 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.558 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.558 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[30d0b3af-0815-40db-9eac-d535d8c471ec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.559 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.565 239853 DEBUG nova.policy [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3de5c2f3ec44d4684754f1707ba5236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.565 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.565 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[beb8f8a7-f6d0-43a8-8869-31204d4e7c27]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.568 239853 DEBUG nova.policy [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c00d8fbb7f314affbdd560b88d4ce236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.571 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.577 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.577 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[6d352a4f-da2f-46bb-8079-30a5eb153dd6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.578 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[e581a078-4713-4911-8c51-3025521be281]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.579 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.591 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "nvme version" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.593 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.593 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.593 239853 DEBUG os_brick.initiator.connectors.lightos [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.594 239853 DEBUG os_brick.utils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.594 239853 DEBUG nova.virt.block_device [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updating existing volume attachment record: b0d70ef2-af9d-421a-ac64-eed58e073382 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 12:59:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/141473058' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.977 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.979 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.979 239853 INFO nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Creating image(s)
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.980 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.980 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Ensure instance console log exists: /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.980 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.981 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:59:16 np0005605476 nova_compute[239846]: 2026-02-02 17:59:16.981 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 12:59:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/182440795' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:17.360 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 12:59:17 np0005605476 nova_compute[239846]: 2026-02-02 17:59:17.361 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 12:59:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:17.362 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 12:59:17 np0005605476 nova_compute[239846]: 2026-02-02 17:59:17.566 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Successfully created port: 28f46804-2246-4d92-95c9-bce2c6c02fcc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 12:59:17 np0005605476 nova_compute[239846]: 2026-02-02 17:59:17.575 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Successfully created port: 8d0898bf-146f-4e2a-a034-a03af27ec188 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 12:59:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 19 MiB/s wr, 172 op/s
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.332 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.334 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.335 239853 INFO nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Creating image(s)
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.336 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.336 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Ensure instance console log exists: /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.337 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.337 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.337 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.732 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Successfully updated port: 8d0898bf-146f-4e2a-a034-a03af27ec188 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.749 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.750 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquired lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.750 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.905 239853 DEBUG nova.compute.manager [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-changed-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.906 239853 DEBUG nova.compute.manager [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Refreshing instance network info cache due to event network-changed-8d0898bf-146f-4e2a-a034-a03af27ec188. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.906 239853 DEBUG oslo_concurrency.lockutils [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.961 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:59:18 np0005605476 nova_compute[239846]: 2026-02-02 17:59:18.994 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Successfully updated port: 28f46804-2246-4d92-95c9-bce2c6c02fcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.010 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.010 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquired lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.010 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.117 239853 DEBUG nova.compute.manager [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-changed-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.118 239853 DEBUG nova.compute.manager [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Refreshing instance network info cache due to event network-changed-28f46804-2246-4d92-95c9-bce2c6c02fcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.118 239853 DEBUG oslo_concurrency.lockutils [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.403 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:59:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 14 MiB/s wr, 129 op/s
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.914 239853 DEBUG nova.network.neutron [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updating instance_info_cache with network_info: [{"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.933 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Releasing lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.933 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Instance network_info: |[{"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.934 239853 DEBUG oslo_concurrency.lockutils [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.934 239853 DEBUG nova.network.neutron [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Refreshing network info cache for port 8d0898bf-146f-4e2a-a034-a03af27ec188 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.937 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Start _get_guest_xml network_info=[{"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': 'b0d70ef2-af9d-421a-ac64-eed58e073382', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '12f9d4e5-d748-4c22-946c-6e2ff0470f3e', 'attached_at': '', 'detached_at': '', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'serial': '08640039-7618-4ae4-95c5-1f173b2afdda'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.941 239853 WARNING nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.945 239853 DEBUG nova.virt.libvirt.host [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.945 239853 DEBUG nova.virt.libvirt.host [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.948 239853 DEBUG nova.virt.libvirt.host [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.949 239853 DEBUG nova.virt.libvirt.host [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.950 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.950 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.950 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.951 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.952 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.952 239853 DEBUG nova.virt.hardware [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.976 239853 DEBUG nova.storage.rbd_utils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:19 np0005605476 nova_compute[239846]: 2026-02-02 17:59:19.980 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.042 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055145.0420237, c88ba03a-1274-4c23-9615-70cad271dad9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.043 239853 INFO nova.compute.manager [-] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] VM Stopped (Lifecycle Event)#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.074 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.078 239853 DEBUG nova.compute.manager [None req-f82477eb-a3cf-40ac-97ea-91ccf1468a4a - - - - - -] [instance: c88ba03a-1274-4c23-9615-70cad271dad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.327 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.349 239853 DEBUG nova.network.neutron [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating instance_info_cache with network_info: [{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.372 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Releasing lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.373 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Instance network_info: |[{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.373 239853 DEBUG oslo_concurrency.lockutils [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.373 239853 DEBUG nova.network.neutron [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Refreshing network info cache for port 28f46804-2246-4d92-95c9-bce2c6c02fcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.376 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Start _get_guest_xml network_info=[{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': 'b2b4904d-5ebf-48ce-a470-b0b39deb0e80', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5287c93c-b6cd-44e8-af49-41bb12bcc421', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5287c93c-b6cd-44e8-af49-41bb12bcc421', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae', 'attached_at': '', 'detached_at': '', 'volume_id': '5287c93c-b6cd-44e8-af49-41bb12bcc421', 'serial': '5287c93c-b6cd-44e8-af49-41bb12bcc421'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.380 239853 WARNING nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.383 239853 DEBUG nova.virt.libvirt.host [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.384 239853 DEBUG nova.virt.libvirt.host [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.387 239853 DEBUG nova.virt.libvirt.host [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.387 239853 DEBUG nova.virt.libvirt.host [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.388 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.388 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.388 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.389 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.389 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.389 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.390 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.390 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.390 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.390 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.391 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.391 239853 DEBUG nova.virt.hardware [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.413 239853 DEBUG nova.storage.rbd_utils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.417 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953887781' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.520 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.757 239853 DEBUG os_brick.encryptors [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Using volume encryption metadata '{'encryption_key_id': '24172b1a-7275-4b66-aed0-7b208e0b2ecb', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '12f9d4e5-d748-4c22-946c-6e2ff0470f3e', 'attached_at': '', 'detached_at': '', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.759 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.777 239853 DEBUG barbicanclient.v1.secrets [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.778 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.798 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.799 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.821 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.822 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.845 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.846 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.874 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.875 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.904 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.904 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888810389' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.925 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.926 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.929 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.945 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:20 np0005605476 nova_compute[239846]: 2026-02-02 17:59:20.946 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.052 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.053 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.075 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.076 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.095 239853 DEBUG os_brick.encryptors [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Using volume encryption metadata '{'encryption_key_id': 'ac688f85-dc1b-4e66-ac1d-7db637e48495', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5287c93c-b6cd-44e8-af49-41bb12bcc421', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5287c93c-b6cd-44e8-af49-41bb12bcc421', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae', 'attached_at': '', 'detached_at': '', 'volume_id': '5287c93c-b6cd-44e8-af49-41bb12bcc421', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.097 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.113 239853 DEBUG barbicanclient.v1.secrets [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.114 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.146 239853 DEBUG nova.network.neutron [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updated VIF entry in instance network info cache for port 8d0898bf-146f-4e2a-a034-a03af27ec188. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.147 239853 DEBUG nova.network.neutron [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updating instance_info_cache with network_info: [{"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.156 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.157 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.168 239853 DEBUG oslo_concurrency.lockutils [req-22533ec4-aa92-40dc-bc01-f19ecd45dd4e req-02b70e7c-f147-4f4b-8359-2773e8003d86 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.178 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.179 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.184 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.185 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.212 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.213 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.214 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.215 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.236 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.236 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.238 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.238 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.261 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.261 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.267 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.268 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.282 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.283 239853 INFO barbicanclient.base [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/24172b1a-7275-4b66-aed0-7b208e0b2ecb#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.295 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.295 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.310 239853 DEBUG barbicanclient.client [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.311 239853 DEBUG nova.virt.libvirt.host [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <volume>08640039-7618-4ae4-95c5-1f173b2afdda</volume>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.316 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.317 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.337 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.338 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.342 239853 DEBUG nova.virt.libvirt.vif [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-338830185',display_name='tempest-TransferEncryptedVolumeTest-server-338830185',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-338830185',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-dj9hoeaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:16Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=12f9d4e5-d748-4c22-946c-6e2ff0470f3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.343 239853 DEBUG nova.network.os_vif_util [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.344 239853 DEBUG nova.network.os_vif_util [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.346 239853 DEBUG nova.objects.instance [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12f9d4e5-d748-4c22-946c-6e2ff0470f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.361 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <uuid>12f9d4e5-d748-4c22-946c-6e2ff0470f3e</uuid>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <name>instance-0000001b</name>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-338830185</nova:name>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:59:19</nova:creationTime>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:user uuid="a3de5c2f3ec44d4684754f1707ba5236">tempest-TransferEncryptedVolumeTest-1386167090-project-member</nova:user>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:project uuid="224fb1fcaf0e4ffb9c3e3e7792ff25c6">tempest-TransferEncryptedVolumeTest-1386167090</nova:project>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:port uuid="8d0898bf-146f-4e2a-a034-a03af27ec188">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="serial">12f9d4e5-d748-4c22-946c-6e2ff0470f3e</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="uuid">12f9d4e5-d748-4c22-946c-6e2ff0470f3e</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <serial>08640039-7618-4ae4-95c5-1f173b2afdda</serial>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="e17f3f14-78cd-4e13-a1a1-11beeb759216"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:ca:62:c9"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="tap8d0898bf-14"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/console.log" append="off"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.362 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Preparing to wait for external event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.362 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.363 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.363 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.364 239853 DEBUG nova.virt.libvirt.vif [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-338830185',display_name='tempest-TransferEncryptedVolumeTest-server-338830185',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-338830185',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-dj9hoeaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:16Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=12f9d4e5-d748-4c22-946c-6e2ff0470f3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.364 239853 DEBUG nova.network.os_vif_util [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.365 239853 DEBUG nova.network.os_vif_util [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.365 239853 DEBUG os_vif [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.367 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.367 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.369 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.369 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.369 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.373 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.373 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d0898bf-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.374 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d0898bf-14, col_values=(('external_ids', {'iface-id': '8d0898bf-146f-4e2a-a034-a03af27ec188', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:62:c9', 'vm-uuid': '12f9d4e5-d748-4c22-946c-6e2ff0470f3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.375 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 NetworkManager[49022]: <info>  [1770055161.3768] manager: (tap8d0898bf-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.378 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.381 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.381 239853 INFO os_vif [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14')#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.395 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.395 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.414 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.414 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.432 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.434 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.437 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.438 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.438 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No VIF found with MAC fa:16:3e:ca:62:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.438 239853 INFO nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Using config drive#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.460 239853 DEBUG nova.storage.rbd_utils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.467 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.468 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.492 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.493 239853 INFO barbicanclient.base [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Calculated Secrets uuid ref: secrets/ac688f85-dc1b-4e66-ac1d-7db637e48495#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.519 239853 DEBUG barbicanclient.client [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.520 239853 DEBUG nova.virt.libvirt.host [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <volume>5287c93c-b6cd-44e8-af49-41bb12bcc421</volume>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.546 239853 DEBUG nova.virt.libvirt.vif [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1288242109',display_name='tempest-TestEncryptedCinderVolumes-server-1288242109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1288242109',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-uosun12z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:15Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.547 239853 DEBUG nova.network.os_vif_util [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.549 239853 DEBUG nova.network.os_vif_util [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.550 239853 DEBUG nova.objects.instance [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.562 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] End _get_guest_xml xml=<domain type="kvm">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <uuid>ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae</uuid>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <name>instance-0000001a</name>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1288242109</nova:name>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:59:20</nova:creationTime>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:user uuid="c00d8fbb7f314affbdd560b88d4ce236">tempest-TestEncryptedCinderVolumes-1563506128-project-member</nova:user>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:project uuid="f1ccd20d4c994d098fc29da09fe94797">tempest-TestEncryptedCinderVolumes-1563506128</nova:project>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <nova:port uuid="28f46804-2246-4d92-95c9-bce2c6c02fcc">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <system>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="serial">ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="uuid">ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </system>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <os>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </os>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <features>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </features>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </clock>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  <devices>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-5287c93c-b6cd-44e8-af49-41bb12bcc421">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </source>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </auth>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <serial>5287c93c-b6cd-44e8-af49-41bb12bcc421</serial>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="29658c71-7d07-4a2c-a35d-62624251a4d5"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </disk>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:09:b1:6a"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <target dev="tap28f46804-22"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </interface>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/console.log" append="off"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </serial>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <video>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </video>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </rng>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 12:59:21 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 12:59:21 np0005605476 nova_compute[239846]:  </devices>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: </domain>
Feb  2 12:59:21 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.563 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Preparing to wait for external event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.563 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.563 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.563 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.564 239853 DEBUG nova.virt.libvirt.vif [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1288242109',display_name='tempest-TestEncryptedCinderVolumes-server-1288242109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1288242109',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-uosun12z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:15Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.564 239853 DEBUG nova.network.os_vif_util [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.565 239853 DEBUG nova.network.os_vif_util [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.565 239853 DEBUG os_vif [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.565 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.566 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.566 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.567 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.568 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f46804-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.568 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28f46804-22, col_values=(('external_ids', {'iface-id': '28f46804-2246-4d92-95c9-bce2c6c02fcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:b1:6a', 'vm-uuid': 'ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.569 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 NetworkManager[49022]: <info>  [1770055161.5703] manager: (tap28f46804-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.571 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.574 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.575 239853 INFO os_vif [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22')#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.619 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.619 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.619 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] No VIF found with MAC fa:16:3e:09:b1:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.620 239853 INFO nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Using config drive#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.634 239853 DEBUG nova.storage.rbd_utils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 14 MiB/s wr, 130 op/s
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.818 239853 INFO nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Creating config drive at /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.826 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpd66z2s6d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.857 239853 DEBUG nova.network.neutron [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updated VIF entry in instance network info cache for port 28f46804-2246-4d92-95c9-bce2c6c02fcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.858 239853 DEBUG nova.network.neutron [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating instance_info_cache with network_info: [{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.878 239853 DEBUG oslo_concurrency.lockutils [req-38ce1ea0-17d6-4cb1-8b85-97c572413656 req-547f0455-88a1-40c2-8414-ef347b84918e e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.960 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpd66z2s6d" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.988 239853 DEBUG nova.storage.rbd_utils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:21 np0005605476 nova_compute[239846]: 2026-02-02 17:59:21.992 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config 12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.017 239853 INFO nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Creating config drive at /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.022 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx96cttr8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.114 239853 DEBUG oslo_concurrency.processutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config 12f9d4e5-d748-4c22-946c-6e2ff0470f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.115 239853 INFO nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Deleting local config drive /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e/disk.config because it was imported into RBD.#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.1520] manager: (tap8d0898bf-14): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.151 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx96cttr8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:22 np0005605476 kernel: tap8d0898bf-14: entered promiscuous mode
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00244|binding|INFO|Claiming lport 8d0898bf-146f-4e2a-a034-a03af27ec188 for this chassis.
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00245|binding|INFO|8d0898bf-146f-4e2a-a034-a03af27ec188: Claiming fa:16:3e:ca:62:c9 10.100.0.12
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.162 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:62:c9 10.100.0.12'], port_security=['fa:16:3e:ca:62:c9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12f9d4e5-d748-4c22-946c-6e2ff0470f3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2cd3f756-a435-48cd-8232-7783559a028a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=8d0898bf-146f-4e2a-a034-a03af27ec188) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.163 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 8d0898bf-146f-4e2a-a034-a03af27ec188 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 bound to our chassis#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.164 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82a7f311-fed2-4a09-8203-270dceb25c76#033[00m
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00246|binding|INFO|Setting lport 8d0898bf-146f-4e2a-a034-a03af27ec188 ovn-installed in OVS
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00247|binding|INFO|Setting lport 8d0898bf-146f-4e2a-a034-a03af27ec188 up in Southbound
Feb  2 12:59:22 np0005605476 systemd-machined[208080]: New machine qemu-26-instance-0000001b.
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.177 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[160e4794-f8b0-4588-9f1c-3274456e4bdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.178 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82a7f311-f1 in ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.180 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82a7f311-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.180 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[10a9f0be-f024-4578-a03c-f2f88f0df205]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.181 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e810e1b-f9d0-4e38-bf2e-c19c761b426a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.184 239853 DEBUG nova.storage.rbd_utils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] rbd image ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:22 np0005605476 systemd[1]: Started Virtual Machine qemu-26-instance-0000001b.
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.192 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db8661-6aa3-4749-9dba-4d89c59988aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.194 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:22 np0005605476 systemd-udevd[269927]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.204 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8f793811-c57c-49fe-8329-7fc44448d299]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.2064] device (tap8d0898bf-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.2068] device (tap8d0898bf-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.211 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.224 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[d6056bf3-4b59-4b96-a88f-41cc825b6728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.2295] manager: (tap82a7f311-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.228 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[8462a445-b0f1-40ec-9413-73857378b01f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 systemd-udevd[269931]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.250 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[016d929e-20fd-4035-a77d-5207518742fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.254 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b8881c-601b-45f0-a082-2d6ca32336e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.2677] device (tap82a7f311-f0): carrier: link connected
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.270 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[f8378a6b-60fb-4ef6-9f1f-2ce0a8be6e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.281 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[066742c7-b9b9-4077-9fab-0c182e3f084e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440359, 'reachable_time': 42000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269980, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.291 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee56ba5-bbfc-4f7a-ab86-41aaa6a4cfbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:34d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440359, 'tstamp': 440359}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269981, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.301 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[76df7d0c-c133-4557-8490-5db230025889]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440359, 'reachable_time': 42000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269982, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.314 239853 DEBUG oslo_concurrency.processutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.314 239853 INFO nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Deleting local config drive /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae/disk.config because it was imported into RBD.#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.322 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a45aeee1-6f15-48ba-b1da-3f88f25eeb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 kernel: tap28f46804-22: entered promiscuous mode
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.3588] manager: (tap28f46804-22): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00248|binding|INFO|Claiming lport 28f46804-2246-4d92-95c9-bce2c6c02fcc for this chassis.
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00249|binding|INFO|28f46804-2246-4d92-95c9-bce2c6c02fcc: Claiming fa:16:3e:09:b1:6a 10.100.0.8
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.362 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.366 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b1:6a 10.100.0.8'], port_security=['fa:16:3e:09:b1:6a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5c671fc8-95a7-4695-88ca-6053121c3610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=28f46804-2246-4d92-95c9-bce2c6c02fcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00250|binding|INFO|Setting lport 28f46804-2246-4d92-95c9-bce2c6c02fcc ovn-installed in OVS
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00251|binding|INFO|Setting lport 28f46804-2246-4d92-95c9-bce2c6c02fcc up in Southbound
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.367 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.367 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.371 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.3732] device (tap28f46804-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.3737] device (tap28f46804-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.377 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[bacc3545-7006-4c43-8c18-23ded5b19a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.378 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.378 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.378 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82a7f311-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:22 np0005605476 systemd-machined[208080]: New machine qemu-27-instance-0000001a.
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.380 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.3826] manager: (tap82a7f311-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Feb  2 12:59:22 np0005605476 kernel: tap82a7f311-f0: entered promiscuous mode
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.384 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82a7f311-f0, col_values=(('external_ids', {'iface-id': '51e5cd2d-8b15-4de8-985f-c87fe41124e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.383 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.385 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:22Z|00252|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.391 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.392 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.393 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.394 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f8378c09-fb20-4247-a0f4-32182b97c06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.394 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.395 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'env', 'PROCESS_TAG=haproxy-82a7f311-fed2-4a09-8203-270dceb25c76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82a7f311-fed2-4a09-8203-270dceb25c76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:59:22 np0005605476 systemd[1]: Started Virtual Machine qemu-27-instance-0000001a.
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.484 239853 DEBUG nova.compute.manager [req-01606c4e-501b-4fe4-a55d-b9e13c2cd7e1 req-80b807ee-0280-422f-b1ce-0431611f3ad7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.485 239853 DEBUG oslo_concurrency.lockutils [req-01606c4e-501b-4fe4-a55d-b9e13c2cd7e1 req-80b807ee-0280-422f-b1ce-0431611f3ad7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.485 239853 DEBUG oslo_concurrency.lockutils [req-01606c4e-501b-4fe4-a55d-b9e13c2cd7e1 req-80b807ee-0280-422f-b1ce-0431611f3ad7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.485 239853 DEBUG oslo_concurrency.lockutils [req-01606c4e-501b-4fe4-a55d-b9e13c2cd7e1 req-80b807ee-0280-422f-b1ce-0431611f3ad7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:22 np0005605476 nova_compute[239846]: 2026-02-02 17:59:22.485 239853 DEBUG nova.compute.manager [req-01606c4e-501b-4fe4-a55d-b9e13c2cd7e1 req-80b807ee-0280-422f-b1ce-0431611f3ad7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Processing event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:59:22 np0005605476 podman[270035]: 2026-02-02 17:59:22.718565102 +0000 UTC m=+0.046071449 container create 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 12:59:22 np0005605476 systemd[1]: Started libpod-conmon-10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564.scope.
Feb  2 12:59:22 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:22 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8e40b599e2ad1f4caf701552f3e6d495ecabc4ed44ed5467b4c5e57d1c97ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:22 np0005605476 podman[270035]: 2026-02-02 17:59:22.697843978 +0000 UTC m=+0.025350355 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:59:22 np0005605476 podman[270035]: 2026-02-02 17:59:22.80194267 +0000 UTC m=+0.129449017 container init 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:59:22 np0005605476 podman[270035]: 2026-02-02 17:59:22.808413342 +0000 UTC m=+0.135919689 container start 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb  2 12:59:22 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [NOTICE]   (270126) : New worker (270128) forked
Feb  2 12:59:22 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [NOTICE]   (270126) : Loading success.
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.864 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 28f46804-2246-4d92-95c9-bce2c6c02fcc in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec unbound from our chassis#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.865 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bad2c851-1c12-4a83-9873-6096fe5f4eec#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.874 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5d209a-48a8-4ac0-9bbe-978dee6171de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.875 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbad2c851-11 in ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.877 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbad2c851-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.877 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5077f9ae-334c-4fde-ba57-41143324a6f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.878 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[905888d3-eae4-4e2c-85cf-d5d8712ac0d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.887 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[97a26281-c77d-4e5f-bdfd-256e339f27da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.896 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[95b57fec-8bcb-4a33-99c9-bee73d8a2355]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.915 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[a386514a-e2b7-4544-9306-67e63ad39b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.918 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b4c171-cf2c-42df-81be-4f0a3ed56d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.9227] manager: (tapbad2c851-10): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.942 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[a1910291-500c-4610-b205-b0db8a54fcbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.944 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[104b6d59-6b52-4907-b21e-885a1e75b0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 NetworkManager[49022]: <info>  [1770055162.9588] device (tapbad2c851-10): carrier: link connected
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.962 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[0b40cb26-14ff-4ba7-a42a-497f1c1d744f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.975 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f188db8b-aca5-44cf-8cf7-43b94fc2cfd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440429, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270147, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.983 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[318bea76-0227-4803-a924-75835345987a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:54c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440429, 'tstamp': 440429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270148, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:22 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:22.997 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ae0d16-a49d-4962-a52e-4e073ef92bb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbad2c851-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:54:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440429, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270149, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.021 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[eac6b9db-88de-4a8f-9190-f27f6ca1515f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.062 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[744a1458-7035-4bad-a295-0e2f3774a3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.063 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.064 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.064 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbad2c851-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:23 np0005605476 kernel: tapbad2c851-10: entered promiscuous mode
Feb  2 12:59:23 np0005605476 nova_compute[239846]: 2026-02-02 17:59:23.102 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:23 np0005605476 NetworkManager[49022]: <info>  [1770055163.1089] manager: (tapbad2c851-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.110 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbad2c851-10, col_values=(('external_ids', {'iface-id': 'ad9a646b-a8d9-417d-9b26-cd7734bca07f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:23 np0005605476 nova_compute[239846]: 2026-02-02 17:59:23.109 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:23 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:23Z|00253|binding|INFO|Releasing lport ad9a646b-a8d9-417d-9b26-cd7734bca07f from this chassis (sb_readonly=0)
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.113 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 12:59:23 np0005605476 nova_compute[239846]: 2026-02-02 17:59:23.113 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.114 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6d0512-7a08-45cf-abf0-4a609012ea30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.115 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/bad2c851-1c12-4a83-9873-6096fe5f4eec.pid.haproxy
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID bad2c851-1c12-4a83-9873-6096fe5f4eec
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 12:59:23 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:23.117 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'env', 'PROCESS_TAG=haproxy-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bad2c851-1c12-4a83-9873-6096fe5f4eec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 12:59:23 np0005605476 nova_compute[239846]: 2026-02-02 17:59:23.120 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:23 np0005605476 podman[270181]: 2026-02-02 17:59:23.396406394 +0000 UTC m=+0.036264922 container create 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb  2 12:59:23 np0005605476 systemd[1]: Started libpod-conmon-3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58.scope.
Feb  2 12:59:23 np0005605476 systemd[1]: Started libcrun container.
Feb  2 12:59:23 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d8a180460e081e8976b140992c18a16ddc02f3349ceceefedfbd6758d0e12e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 12:59:23 np0005605476 podman[270181]: 2026-02-02 17:59:23.454212022 +0000 UTC m=+0.094070550 container init 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 12:59:23 np0005605476 podman[270181]: 2026-02-02 17:59:23.458309438 +0000 UTC m=+0.098167966 container start 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 12:59:23 np0005605476 podman[270181]: 2026-02-02 17:59:23.376543025 +0000 UTC m=+0.016401573 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 12:59:23 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [NOTICE]   (270198) : New worker (270200) forked
Feb  2 12:59:23 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [NOTICE]   (270198) : Loading success.
Feb  2 12:59:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.038 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.038 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.039 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.039 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.039 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] No waiting events found dispatching network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.039 239853 WARNING nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received unexpected event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.039 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.040 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.040 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.040 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.040 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Processing event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.040 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.041 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.041 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.041 239853 DEBUG oslo_concurrency.lockutils [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.041 239853 DEBUG nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] No waiting events found dispatching network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.041 239853 WARNING nova.compute.manager [req-0310b7c9-f703-46e5-a0e5-5e327e38b87f req-789dad4d-4fff-42a9-a4b4-5b00c007729a e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received unexpected event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc for instance with vm_state building and task_state spawning.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.069 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.0691142, ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.069 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] VM Started (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.073 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.076 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.082 239853 INFO nova.virt.libvirt.driver [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Instance spawned successfully.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.082 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.088 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.093 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.104 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.105 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.106 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.107 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.108 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.109 239853 DEBUG nova.virt.libvirt.driver [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.117 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.118 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.069245, ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.118 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.138 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.142 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.145 239853 INFO nova.virt.libvirt.driver [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Instance spawned successfully.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.146 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.160 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.164 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.075624, ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.164 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.174 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.175 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.175 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.175 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.176 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.176 239853 DEBUG nova.virt.libvirt.driver [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.198 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.201 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.216 239853 INFO nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Took 8.24 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.216 239853 DEBUG nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.246 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.246 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.137701, 12f9d4e5-d748-4c22-946c-6e2ff0470f3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.246 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] VM Started (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.270 239853 INFO nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Took 6.94 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.271 239853 DEBUG nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.279 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.282 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.296 239853 INFO nova.compute.manager [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Took 10.69 seconds to build instance.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.319 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.320 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.137794, 12f9d4e5-d748-4c22-946c-6e2ff0470f3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.320 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] VM Paused (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.343 239853 DEBUG oslo_concurrency.lockutils [None req-f84c148a-c998-40e9-8db1-2ae22abb03e6 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.344 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.346 239853 INFO nova.compute.manager [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Took 10.31 seconds to build instance.#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.348 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055165.141192, 12f9d4e5-d748-4c22-946c-6e2ff0470f3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.348 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] VM Resumed (Lifecycle Event)#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.365 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.370 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.371 239853 DEBUG oslo_concurrency.lockutils [None req-e46b2f12-85f1-4568-bffc-7c470446f562 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:25 np0005605476 nova_compute[239846]: 2026-02-02 17:59:25.373 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 12:59:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 30 KiB/s wr, 34 op/s
Feb  2 12:59:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:26 np0005605476 nova_compute[239846]: 2026-02-02 17:59:26.571 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:27 np0005605476 podman[270221]: 2026-02-02 17:59:27.628922439 +0000 UTC m=+0.073281745 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 12:59:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 30 KiB/s wr, 34 op/s
Feb  2 12:59:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Feb  2 12:59:28 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Feb  2 12:59:28 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Feb  2 12:59:28 np0005605476 nova_compute[239846]: 2026-02-02 17:59:28.759 239853 DEBUG nova.compute.manager [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-changed-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:28 np0005605476 nova_compute[239846]: 2026-02-02 17:59:28.759 239853 DEBUG nova.compute.manager [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Refreshing instance network info cache due to event network-changed-8d0898bf-146f-4e2a-a034-a03af27ec188. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:59:28 np0005605476 nova_compute[239846]: 2026-02-02 17:59:28.760 239853 DEBUG oslo_concurrency.lockutils [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:28 np0005605476 nova_compute[239846]: 2026-02-02 17:59:28.760 239853 DEBUG oslo_concurrency.lockutils [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:28 np0005605476 nova_compute[239846]: 2026-02-02 17:59:28.760 239853 DEBUG nova.network.neutron [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Refreshing network info cache for port 8d0898bf-146f-4e2a-a034-a03af27ec188 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:59:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 130 op/s
Feb  2 12:59:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Feb  2 12:59:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Feb  2 12:59:30 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.367 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.827 239853 DEBUG nova.compute.manager [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-changed-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.827 239853 DEBUG nova.compute.manager [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Refreshing instance network info cache due to event network-changed-28f46804-2246-4d92-95c9-bce2c6c02fcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.827 239853 DEBUG oslo_concurrency.lockutils [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.828 239853 DEBUG oslo_concurrency.lockutils [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:30 np0005605476 nova_compute[239846]: 2026-02-02 17:59:30.828 239853 DEBUG nova.network.neutron [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Refreshing network info cache for port 28f46804-2246-4d92-95c9-bce2c6c02fcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:59:31 np0005605476 nova_compute[239846]: 2026-02-02 17:59:31.397 239853 DEBUG nova.network.neutron [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updated VIF entry in instance network info cache for port 8d0898bf-146f-4e2a-a034-a03af27ec188. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:59:31 np0005605476 nova_compute[239846]: 2026-02-02 17:59:31.398 239853 DEBUG nova.network.neutron [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updating instance_info_cache with network_info: [{"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:31 np0005605476 nova_compute[239846]: 2026-02-02 17:59:31.415 239853 DEBUG oslo_concurrency.lockutils [req-49510dc7-4f89-469f-beb8-2642c1d95e8b req-4663e510-4bd6-4d3a-80e8-9dc3304601aa e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-12f9d4e5-d748-4c22-946c-6e2ff0470f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390687210' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:31 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:31 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390687210' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:31 np0005605476 nova_compute[239846]: 2026-02-02 17:59:31.574 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:31 np0005605476 podman[270240]: 2026-02-02 17:59:31.621707744 +0000 UTC m=+0.070147557 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 12:59:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 39 KiB/s wr, 250 op/s
Feb  2 12:59:32 np0005605476 nova_compute[239846]: 2026-02-02 17:59:32.505 239853 DEBUG nova.network.neutron [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updated VIF entry in instance network info cache for port 28f46804-2246-4d92-95c9-bce2c6c02fcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:59:32 np0005605476 nova_compute[239846]: 2026-02-02 17:59:32.506 239853 DEBUG nova.network.neutron [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating instance_info_cache with network_info: [{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:32 np0005605476 nova_compute[239846]: 2026-02-02 17:59:32.534 239853 DEBUG oslo_concurrency.lockutils [req-2d822c36-a8bb-42cc-9835-6d6665fc964c req-9645f781-7dc4-4269-ac36-7c29ae47f3d8 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 385 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.6 KiB/s wr, 206 op/s
Feb  2 12:59:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Feb  2 12:59:34 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Feb  2 12:59:34 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Feb  2 12:59:35 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 12:59:35 np0005605476 nova_compute[239846]: 2026-02-02 17:59:35.405 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 1.6 MiB/s wr, 305 op/s
Feb  2 12:59:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:35 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:35Z|00053|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.8
Feb  2 12:59:35 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:35Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:09:b1:6a 10.100.0.8
Feb  2 12:59:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Feb  2 12:59:36 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Feb  2 12:59:36 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Feb  2 12:59:36 np0005605476 nova_compute[239846]: 2026-02-02 17:59:36.577 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:36 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:36Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:62:c9 10.100.0.12
Feb  2 12:59:36 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:36Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:62:c9 10.100.0.12
Feb  2 12:59:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_17:59:36
Feb  2 12:59:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 12:59:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 12:59:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes']
Feb  2 12:59:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.6 MiB/s wr, 198 op/s
Feb  2 12:59:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676026719' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:37 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:37 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676026719' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:59:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 12:59:38 np0005605476 nova_compute[239846]: 2026-02-02 17:59:38.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:39Z|00057|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.8
Feb  2 12:59:39 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:39Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:09:b1:6a 10.100.0.8
Feb  2 12:59:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 442 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 8.3 MiB/s wr, 206 op/s
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Feb  2 12:59:40 np0005605476 nova_compute[239846]: 2026-02-02 17:59:40.407 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Feb  2 12:59:40 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Feb  2 12:59:40 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:40Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:b1:6a 10.100.0.8
Feb  2 12:59:40 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:40Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:b1:6a 10.100.0.8
Feb  2 12:59:41 np0005605476 nova_compute[239846]: 2026-02-02 17:59:41.579 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 466 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 MiB/s wr, 234 op/s
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Feb  2 12:59:42 np0005605476 nova_compute[239846]: 2026-02-02 17:59:42.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:42 np0005605476 nova_compute[239846]: 2026-02-02 17:59:42.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135407203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:42 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135407203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:43 np0005605476 nova_compute[239846]: 2026-02-02 17:59:43.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 466 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 MiB/s wr, 234 op/s
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.282 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.282 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.283 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.283 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.283 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267437734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.880 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.962 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.962 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.965 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:59:44 np0005605476 nova_compute[239846]: 2026-02-02 17:59:44.965 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.107 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.108 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3867MB free_disk=59.9874726459384GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.108 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.108 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.154 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.155 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.155 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.156 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.156 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.157 239853 INFO nova.compute.manager [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Terminating instance#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.159 239853 DEBUG nova.compute.manager [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.370 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.371 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 12f9d4e5-d748-4c22-946c-6e2ff0470f3e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.371 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.371 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.409 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.418 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:45 np0005605476 kernel: tap8d0898bf-14 (unregistering): left promiscuous mode
Feb  2 12:59:45 np0005605476 NetworkManager[49022]: <info>  [1770055185.5276] device (tap8d0898bf-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 12:59:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:45Z|00254|binding|INFO|Releasing lport 8d0898bf-146f-4e2a-a034-a03af27ec188 from this chassis (sb_readonly=0)
Feb  2 12:59:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:45Z|00255|binding|INFO|Setting lport 8d0898bf-146f-4e2a-a034-a03af27ec188 down in Southbound
Feb  2 12:59:45 np0005605476 ovn_controller[146041]: 2026-02-02T17:59:45Z|00256|binding|INFO|Removing iface tap8d0898bf-14 ovn-installed in OVS
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.540 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.549 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb  2 12:59:45 np0005605476 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001b.scope: Consumed 14.284s CPU time.
Feb  2 12:59:45 np0005605476 systemd-machined[208080]: Machine qemu-26-instance-0000001b terminated.
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.611 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:62:c9 10.100.0.12'], port_security=['fa:16:3e:ca:62:c9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12f9d4e5-d748-4c22-946c-6e2ff0470f3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2cd3f756-a435-48cd-8232-7783559a028a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=8d0898bf-146f-4e2a-a034-a03af27ec188) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.612 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 8d0898bf-146f-4e2a-a034-a03af27ec188 in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 unbound from our chassis#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.614 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82a7f311-fed2-4a09-8203-270dceb25c76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.615 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[34b86650-ec04-4bfc-a6d3-eb18e83f3553]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.616 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace which is not needed anymore#033[00m
Feb  2 12:59:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 470 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 128 op/s
Feb  2 12:59:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:45 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [NOTICE]   (270126) : haproxy version is 2.8.14-c23fe91
Feb  2 12:59:45 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [NOTICE]   (270126) : path to executable is /usr/sbin/haproxy
Feb  2 12:59:45 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [WARNING]  (270126) : Exiting Master process...
Feb  2 12:59:45 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [ALERT]    (270126) : Current worker (270128) exited with code 143 (Terminated)
Feb  2 12:59:45 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270093]: [WARNING]  (270126) : All workers exited. Exiting... (0)
Feb  2 12:59:45 np0005605476 systemd[1]: libpod-10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564.scope: Deactivated successfully.
Feb  2 12:59:45 np0005605476 podman[270334]: 2026-02-02 17:59:45.737394763 +0000 UTC m=+0.041543211 container died 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 12:59:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564-userdata-shm.mount: Deactivated successfully.
Feb  2 12:59:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7e8e40b599e2ad1f4caf701552f3e6d495ecabc4ed44ed5467b4c5e57d1c97ec-merged.mount: Deactivated successfully.
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.773 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 podman[270334]: 2026-02-02 17:59:45.775885358 +0000 UTC m=+0.080033806 container cleanup 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.777 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 systemd[1]: libpod-conmon-10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564.scope: Deactivated successfully.
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.788 239853 INFO nova.virt.libvirt.driver [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Instance destroyed successfully.#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.789 239853 DEBUG nova.objects.instance [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'resources' on Instance uuid 12f9d4e5-d748-4c22-946c-6e2ff0470f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.809 239853 DEBUG nova.virt.libvirt.vif [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-338830185',display_name='tempest-TransferEncryptedVolumeTest-server-338830185',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-338830185',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-dj9hoeaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:59:25Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=12f9d4e5-d748-4c22-946c-6e2ff0470f3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.810 239853 DEBUG nova.network.os_vif_util [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "8d0898bf-146f-4e2a-a034-a03af27ec188", "address": "fa:16:3e:ca:62:c9", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d0898bf-14", "ovs_interfaceid": "8d0898bf-146f-4e2a-a034-a03af27ec188", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.811 239853 DEBUG nova.network.os_vif_util [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.811 239853 DEBUG os_vif [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.812 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.813 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d0898bf-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.816 239853 DEBUG nova.compute.manager [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-unplugged-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.817 239853 DEBUG oslo_concurrency.lockutils [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.817 239853 DEBUG oslo_concurrency.lockutils [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.817 239853 DEBUG oslo_concurrency.lockutils [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.817 239853 DEBUG nova.compute.manager [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] No waiting events found dispatching network-vif-unplugged-8d0898bf-146f-4e2a-a034-a03af27ec188 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.818 239853 DEBUG nova.compute.manager [req-a3191a80-428b-483d-a718-b384464e67bc req-368b7b49-02cc-44af-b4db-58e69718e263 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-unplugged-8d0898bf-146f-4e2a-a034-a03af27ec188 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.818 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.819 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.824 239853 INFO os_vif [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:62:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d0898bf-146f-4e2a-a034-a03af27ec188,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d0898bf-14')#033[00m
Feb  2 12:59:45 np0005605476 podman[270372]: 2026-02-02 17:59:45.846729573 +0000 UTC m=+0.044757122 container remove 10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.851 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2756c6ad-9bc8-41bc-83f0-e7d7aaf2d824]: (4, ('Mon Feb  2 05:59:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564)\n10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564\nMon Feb  2 05:59:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564)\n10867a702e8abb3a8179e0e522b032b209c10df71ff6ff8bbc395d280c737564\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.853 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[952de608-2c9a-4f5e-83f2-cdb5145d3b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.854 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.856 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 kernel: tap82a7f311-f0: left promiscuous mode
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.863 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.864 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.866 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[54484d83-37a9-40b7-b88f-2e0a7150f067]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.885 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1865f7-c3c2-4ac9-8998-7a26fe99c3f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.886 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d36bee7d-19c0-4cfc-b4df-9e3f95e5cd09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.898 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b32f0050-af84-4823-af62-f42393fd2455]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440355, 'reachable_time': 32367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270406, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 systemd[1]: run-netns-ovnmeta\x2d82a7f311\x2dfed2\x2d4a09\x2d8203\x2d270dceb25c76.mount: Deactivated successfully.
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.907 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 12:59:45 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:45.907 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[2dad71b4-12ad-4cf2-a297-70c1a1aeea74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.981 239853 INFO nova.virt.libvirt.driver [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Deleting instance files /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e_del#033[00m
Feb  2 12:59:45 np0005605476 nova_compute[239846]: 2026-02-02 17:59:45.982 239853 INFO nova.virt.libvirt.driver [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Deletion of /var/lib/nova/instances/12f9d4e5-d748-4c22-946c-6e2ff0470f3e_del complete#033[00m
Feb  2 12:59:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194786264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.007 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.012 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.031 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.039 239853 INFO nova.compute.manager [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.039 239853 DEBUG oslo.service.loopingcall [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.039 239853 DEBUG nova.compute.manager [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.040 239853 DEBUG nova.network.neutron [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.052 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.052 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Feb  2 12:59:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Feb  2 12:59:46 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.638 239853 DEBUG nova.network.neutron [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:46.651 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:46.652 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 17:59:46.652 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.656 239853 INFO nova.compute.manager [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Took 0.62 seconds to deallocate network for instance.#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.849 239853 INFO nova.compute.manager [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.898 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.899 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:46 np0005605476 nova_compute[239846]: 2026-02-02 17:59:46.954 239853 DEBUG oslo_concurrency.processutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.053 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.054 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.054 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.210 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.211 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.211 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.211 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 12:59:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Feb  2 12:59:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Feb  2 12:59:47 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Feb  2 12:59:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155313376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.494 239853 DEBUG oslo_concurrency.processutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.498 239853 DEBUG nova.compute.provider_tree [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.512 239853 DEBUG nova.scheduler.client.report [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.532 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.558 239853 INFO nova.scheduler.client.report [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Deleted allocations for instance 12f9d4e5-d748-4c22-946c-6e2ff0470f3e#033[00m
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.351493499610793e-05 of space, bias 1.0, pg target 0.004054480498832379 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005717847188438003 of space, bias 1.0, pg target 1.7153541565314008 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2991394293357163e-06 of space, bias 1.0, pg target 0.0006874426893713792 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319841658502 of space, bias 1.0, pg target 0.19926316326558918 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.707518990728984e-07 of space, bias 4.0, pg target 0.0011610192712911865 quantized to 16 (current 16)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.618 239853 DEBUG oslo_concurrency.lockutils [None req-43b9ceb4-b83d-47cf-ade5-19281ee43a8c a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 470 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 722 KiB/s rd, 725 KiB/s wr, 57 op/s
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.887 239853 DEBUG nova.compute.manager [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.888 239853 DEBUG oslo_concurrency.lockutils [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.888 239853 DEBUG oslo_concurrency.lockutils [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.888 239853 DEBUG oslo_concurrency.lockutils [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "12f9d4e5-d748-4c22-946c-6e2ff0470f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.889 239853 DEBUG nova.compute.manager [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] No waiting events found dispatching network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.889 239853 WARNING nova.compute.manager [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received unexpected event network-vif-plugged-8d0898bf-146f-4e2a-a034-a03af27ec188 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 12:59:47 np0005605476 nova_compute[239846]: 2026-02-02 17:59:47.889 239853 DEBUG nova.compute.manager [req-1b354eae-a32c-497f-ba5b-a1e10f642095 req-fc966a65-19fd-46c7-b732-e5a5492a3fb2 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Received event network-vif-deleted-8d0898bf-146f-4e2a-a034-a03af27ec188 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:48 np0005605476 nova_compute[239846]: 2026-02-02 17:59:48.187 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating instance_info_cache with network_info: [{"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:48 np0005605476 nova_compute[239846]: 2026-02-02 17:59:48.201 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:48 np0005605476 nova_compute[239846]: 2026-02-02 17:59:48.201 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 12:59:48 np0005605476 nova_compute[239846]: 2026-02-02 17:59:48.202 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:48 np0005605476 nova_compute[239846]: 2026-02-02 17:59:48.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050001869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050001869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:49 np0005605476 nova_compute[239846]: 2026-02-02 17:59:49.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 469 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 600 KiB/s wr, 105 op/s
Feb  2 12:59:50 np0005605476 nova_compute[239846]: 2026-02-02 17:59:50.411 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Feb  2 12:59:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Feb  2 12:59:50 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Feb  2 12:59:50 np0005605476 nova_compute[239846]: 2026-02-02 17:59:50.815 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 469 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 26 KiB/s wr, 86 op/s
Feb  2 12:59:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Feb  2 12:59:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Feb  2 12:59:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Feb  2 12:59:52 np0005605476 nova_compute[239846]: 2026-02-02 17:59:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 12:59:52 np0005605476 nova_compute[239846]: 2026-02-02 17:59:52.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 12:59:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 469 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 25 KiB/s wr, 82 op/s
Feb  2 12:59:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Feb  2 12:59:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Feb  2 12:59:53 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:53.999 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.000 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.018 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.082 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.082 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.090 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.090 239853 INFO nova.compute.claims [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.211 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446961056' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446961056' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 12:59:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388772818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.745 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.750 239853 DEBUG nova.compute.provider_tree [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.766 239853 DEBUG nova.scheduler.client.report [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.786 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.787 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.829 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.829 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.848 239853 INFO nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.867 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 12:59:54 np0005605476 nova_compute[239846]: 2026-02-02 17:59:54.919 239853 INFO nova.virt.block_device [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Booting with volume 08640039-7618-4ae4-95c5-1f173b2afdda at /dev/vda#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.038 239853 DEBUG os_brick.utils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.040 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.049 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.050 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[393bf8f9-8089-417c-815a-aa7c6fc46488]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.051 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.056 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.056 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[434e2a2c-0352-44ef-8bfb-63a8c4a0d324]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.057 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.064 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.065 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6e7066-8105-4eed-bb79-1a50edb1506b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.066 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[c7981d8b-7df6-4821-9f14-5691c43b5a07]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.067 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.083 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.086 239853 DEBUG os_brick.initiator.connectors.lightos [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.086 239853 DEBUG os_brick.initiator.connectors.lightos [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.086 239853 DEBUG os_brick.initiator.connectors.lightos [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.086 239853 DEBUG os_brick.utils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.087 239853 DEBUG nova.virt.block_device [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updating existing volume attachment record: 7ff2423d-0818-41fc-823c-712059403c2b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.413 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.516 239853 DEBUG nova.policy [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3de5c2f3ec44d4684754f1707ba5236', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 12:59:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 908 KiB/s rd, 883 KiB/s wr, 78 op/s
Feb  2 12:59:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 12:59:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/635337103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:55 np0005605476 nova_compute[239846]: 2026-02-02 17:59:55.861 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.130 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.131 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.132 239853 INFO nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Creating image(s)#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.132 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.132 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Ensure instance console log exists: /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.133 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.133 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.133 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 12:59:56 np0005605476 nova_compute[239846]: 2026-02-02 17:59:56.502 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Successfully created port: 5321080d-38e7-4244-b22c-caa9bf7aa80c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 12:59:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 774 KiB/s rd, 752 KiB/s wr, 55 op/s
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.674 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Successfully updated port: 5321080d-38e7-4244-b22c-caa9bf7aa80c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.689 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.690 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquired lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.690 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.798 239853 DEBUG nova.compute.manager [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-changed-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.798 239853 DEBUG nova.compute.manager [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Refreshing instance network info cache due to event network-changed-5321080d-38e7-4244-b22c-caa9bf7aa80c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.798 239853 DEBUG oslo_concurrency.lockutils [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 12:59:57 np0005605476 nova_compute[239846]: 2026-02-02 17:59:57.845 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 12:59:58 np0005605476 podman[270461]: 2026-02-02 17:59:58.620941757 +0000 UTC m=+0.069033896 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.690 239853 DEBUG nova.network.neutron [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updating instance_info_cache with network_info: [{"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.714 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Releasing lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.715 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Instance network_info: |[{"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.715 239853 DEBUG oslo_concurrency.lockutils [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.715 239853 DEBUG nova.network.neutron [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Refreshing network info cache for port 5321080d-38e7-4244-b22c-caa9bf7aa80c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.718 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Start _get_guest_xml network_info=[{"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'delete_on_termination': False, 'disk_bus': 'virtio', 'attachment_id': '7ff2423d-0818-41fc-823c-712059403c2b', 'device_type': 'disk', 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '89c3837c-cf0a-4953-a4fb-2477c854795f', 'attached_at': '', 'detached_at': '', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'serial': '08640039-7618-4ae4-95c5-1f173b2afdda'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.723 239853 WARNING nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.727 239853 DEBUG nova.virt.libvirt.host [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.728 239853 DEBUG nova.virt.libvirt.host [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.731 239853 DEBUG nova.virt.libvirt.host [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.732 239853 DEBUG nova.virt.libvirt.host [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.733 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.733 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.733 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.733 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.734 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.734 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.734 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.734 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.735 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.735 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.735 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.735 239853 DEBUG nova.virt.hardware [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.756 239853 DEBUG nova.storage.rbd_utils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 12:59:58 np0005605476 nova_compute[239846]: 2026-02-02 17:59:58.759 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 12:59:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 12:59:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416900351' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.284 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.464 239853 DEBUG os_brick.encryptors [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'e5a08b26-ec86-41ed-aac4-1804a518e6da', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '89c3837c-cf0a-4953-a4fb-2477c854795f', 'attached_at': '', 'detached_at': '', 'volume_id': '08640039-7618-4ae4-95c5-1f173b2afdda', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.467 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.481 239853 DEBUG barbicanclient.v1.secrets [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.481 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.502 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.503 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.524 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.525 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.557 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.558 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 657 KiB/s wr, 51 op/s
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.708 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.708 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.731 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.732 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.760 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.761 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.786 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.787 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.807 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.807 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.828 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.829 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.862 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.863 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.887 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.887 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.896 239853 DEBUG nova.network.neutron [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updated VIF entry in instance network info cache for port 5321080d-38e7-4244-b22c-caa9bf7aa80c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.896 239853 DEBUG nova.network.neutron [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updating instance_info_cache with network_info: [{"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.910 239853 DEBUG oslo_concurrency.lockutils [req-63ae174d-4157-4a7e-a832-f627340fdb2d req-8ae00469-06b3-4bbd-83ae-bae3c2bfc06d e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.911 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.912 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.933 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.933 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.954 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.954 239853 INFO barbicanclient.base [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Calculated Secrets uuid ref: secrets/e5a08b26-ec86-41ed-aac4-1804a518e6da#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.971 239853 DEBUG barbicanclient.client [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.972 239853 DEBUG nova.virt.libvirt.host [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 12:59:59 np0005605476 nova_compute[239846]:  <usage type="volume">
Feb  2 12:59:59 np0005605476 nova_compute[239846]:    <volume>08640039-7618-4ae4-95c5-1f173b2afdda</volume>
Feb  2 12:59:59 np0005605476 nova_compute[239846]:  </usage>
Feb  2 12:59:59 np0005605476 nova_compute[239846]: </secret>
Feb  2 12:59:59 np0005605476 nova_compute[239846]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.995 239853 DEBUG nova.virt.libvirt.vif [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1794376160',display_name='tempest-TransferEncryptedVolumeTest-server-1794376160',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1794376160',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-hgasfw90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:54Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=89c3837c-cf0a-4953-a4fb-2477c854795f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.995 239853 DEBUG nova.network.os_vif_util [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.996 239853 DEBUG nova.network.os_vif_util [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 12:59:59 np0005605476 nova_compute[239846]: 2026-02-02 17:59:59.997 239853 DEBUG nova.objects.instance [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89c3837c-cf0a-4953-a4fb-2477c854795f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.011 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] End _get_guest_xml xml=<domain type="kvm">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <uuid>89c3837c-cf0a-4953-a4fb-2477c854795f</uuid>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <name>instance-0000001c</name>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1794376160</nova:name>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 17:59:58</nova:creationTime>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:user uuid="a3de5c2f3ec44d4684754f1707ba5236">tempest-TransferEncryptedVolumeTest-1386167090-project-member</nova:user>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:project uuid="224fb1fcaf0e4ffb9c3e3e7792ff25c6">tempest-TransferEncryptedVolumeTest-1386167090</nova:project>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <nova:port uuid="5321080d-38e7-4244-b22c-caa9bf7aa80c">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <system>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="serial">89c3837c-cf0a-4953-a4fb-2477c854795f</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="uuid">89c3837c-cf0a-4953-a4fb-2477c854795f</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </system>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <os>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </os>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <features>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </features>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </clock>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  <devices>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </source>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </auth>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </disk>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="volumes/volume-08640039-7618-4ae4-95c5-1f173b2afdda">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </source>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </auth>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <serial>08640039-7618-4ae4-95c5-1f173b2afdda</serial>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <encryption format="luks">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:        <secret type="passphrase" uuid="5633b09a-1c9d-46f4-849d-02ff2d1b0005"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      </encryption>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </disk>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:db:16:59"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <target dev="tap5321080d-38"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </interface>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/console.log" append="off"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </serial>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <video>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </video>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </rng>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 13:00:00 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 13:00:00 np0005605476 nova_compute[239846]:  </devices>
Feb  2 13:00:00 np0005605476 nova_compute[239846]: </domain>
Feb  2 13:00:00 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.011 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Preparing to wait for external event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.011 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.012 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.012 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.012 239853 DEBUG nova.virt.libvirt.vif [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T17:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1794376160',display_name='tempest-TransferEncryptedVolumeTest-server-1794376160',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1794376160',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-hgasfw90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T17:59:54Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=89c3837c-cf0a-4953-a4fb-2477c854795f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.013 239853 DEBUG nova.network.os_vif_util [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.013 239853 DEBUG nova.network.os_vif_util [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.013 239853 DEBUG os_vif [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.014 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.014 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.015 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.017 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.017 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5321080d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.017 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5321080d-38, col_values=(('external_ids', {'iface-id': '5321080d-38e7-4244-b22c-caa9bf7aa80c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:16:59', 'vm-uuid': '89c3837c-cf0a-4953-a4fb-2477c854795f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.0196] manager: (tap5321080d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.018 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.022 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.023 239853 INFO os_vif [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38')#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.062 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.062 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.063 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] No VIF found with MAC fa:16:3e:db:16:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.063 239853 INFO nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Using config drive#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.080 239853 DEBUG nova.storage.rbd_utils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.415 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.526 239853 INFO nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Creating config drive at /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.530 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmews38h5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.653 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmews38h5" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.682 239853 DEBUG nova.storage.rbd_utils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] rbd image 89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Feb  2 13:00:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.687 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config 89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.782 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055185.7817378, 12f9d4e5-d748-4c22-946c-6e2ff0470f3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.783 239853 INFO nova.compute.manager [-] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] VM Stopped (Lifecycle Event)#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.802 239853 DEBUG nova.compute.manager [None req-662290a7-68f0-49a9-97ff-25988e99fdf3 - - - - - -] [instance: 12f9d4e5-d748-4c22-946c-6e2ff0470f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.811 239853 DEBUG oslo_concurrency.processutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config 89c3837c-cf0a-4953-a4fb-2477c854795f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.812 239853 INFO nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Deleting local config drive /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f/disk.config because it was imported into RBD.#033[00m
Feb  2 13:00:00 np0005605476 kernel: tap5321080d-38: entered promiscuous mode
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.8485] manager: (tap5321080d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00257|binding|INFO|Claiming lport 5321080d-38e7-4244-b22c-caa9bf7aa80c for this chassis.
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00258|binding|INFO|5321080d-38e7-4244-b22c-caa9bf7aa80c: Claiming fa:16:3e:db:16:59 10.100.0.6
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.849 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.856 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00259|binding|INFO|Setting lport 5321080d-38e7-4244-b22c-caa9bf7aa80c ovn-installed in OVS
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.858 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00260|binding|INFO|Setting lport 5321080d-38e7-4244-b22c-caa9bf7aa80c up in Southbound
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.862 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:16:59 10.100.0.6'], port_security=['fa:16:3e:db:16:59 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '89c3837c-cf0a-4953-a4fb-2477c854795f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2cd3f756-a435-48cd-8232-7783559a028a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=5321080d-38e7-4244-b22c-caa9bf7aa80c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.864 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 5321080d-38e7-4244-b22c-caa9bf7aa80c in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 bound to our chassis#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.865 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82a7f311-fed2-4a09-8203-270dceb25c76#033[00m
Feb  2 13:00:00 np0005605476 systemd-udevd[270593]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.871 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea084e84-1ab2-4b60-95b8-20f0927738ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.872 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82a7f311-f1 in ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 13:00:00 np0005605476 systemd-machined[208080]: New machine qemu-28-instance-0000001c.
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.873 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82a7f311-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.873 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[0f02f3d9-e44b-4700-b1e2-7e03fdef80bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.874 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9195b7fc-bce2-440b-a37f-c8363fedd2ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.8803] device (tap5321080d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.8809] device (tap5321080d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.883 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[88c7fea4-1b17-492a-8dbb-ffe21f9b5ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.895 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[34f29bca-11aa-439d-a2ef-5285320c81a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.919 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8a3524-4089-4edc-aa0b-5f8fdc0db8e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.925 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[72732dc2-c26f-47c9-b419-c65cc33de855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.9264] manager: (tap82a7f311-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.937 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.937 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.938 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.938 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.938 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.940 239853 INFO nova.compute.manager [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Terminating instance#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.941 239853 DEBUG nova.compute.manager [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.948 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[49c992a3-582e-4fd6-8cae-104e87744c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.950 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[b48da1fb-27d5-416d-bd92-a13151072eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.9667] device (tap82a7f311-f0): carrier: link connected
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.970 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb23803-e2d8-4c9f-810e-fab21af6ee12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 kernel: tap28f46804-22 (unregistering): left promiscuous mode
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.983 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[54390028-2125-45f7-bb56-d345588a6c7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444229, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270626, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:00 np0005605476 NetworkManager[49022]: <info>  [1770055200.9863] device (tap28f46804-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.993 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 nova_compute[239846]: 2026-02-02 18:00:00.995 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00261|binding|INFO|Releasing lport 28f46804-2246-4d92-95c9-bce2c6c02fcc from this chassis (sb_readonly=0)
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00262|binding|INFO|Setting lport 28f46804-2246-4d92-95c9-bce2c6c02fcc down in Southbound
Feb  2 13:00:00 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:00Z|00263|binding|INFO|Removing iface tap28f46804-22 ovn-installed in OVS
Feb  2 13:00:00 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:00.996 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[58d23e4f-3868-4268-85a8-a35ea4ca9e4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:34d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444229, 'tstamp': 444229}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270628, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.004 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.007 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b1:6a 10.100.0.8'], port_security=['fa:16:3e:09:b1:6a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1ccd20d4c994d098fc29da09fe94797', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5c671fc8-95a7-4695-88ca-6053121c3610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd8473dd-56bb-4af5-90b0-f8395d5df17e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=28f46804-2246-4d92-95c9-bce2c6c02fcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.009 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c48df70-7608-4975-9f43-a5c70cc2b72a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a7f311-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:34:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444229, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270631, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb  2 13:00:01 np0005605476 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001a.scope: Consumed 14.860s CPU time.
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.040 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecd125b-5c8f-4b43-be36-98d9ce98e11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 systemd-machined[208080]: Machine qemu-27-instance-0000001a terminated.
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.092 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[60f3019b-2bab-4a5c-a22e-13da458f96ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.093 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.093 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.094 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82a7f311-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.095 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 NetworkManager[49022]: <info>  [1770055201.0964] manager: (tap82a7f311-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Feb  2 13:00:01 np0005605476 kernel: tap82a7f311-f0: entered promiscuous mode
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.102 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.102 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82a7f311-f0, col_values=(('external_ids', {'iface-id': '51e5cd2d-8b15-4de8-985f-c87fe41124e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:01 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:01Z|00264|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.109 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.112 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.113 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.114 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9b115d8e-5024-4a3a-b8bd-500a1442e4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.114 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/82a7f311-fed2-4a09-8203-270dceb25c76.pid.haproxy
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 82a7f311-fed2-4a09-8203-270dceb25c76
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.115 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'env', 'PROCESS_TAG=haproxy-82a7f311-fed2-4a09-8203-270dceb25c76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82a7f311-fed2-4a09-8203-270dceb25c76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 13:00:01 np0005605476 NetworkManager[49022]: <info>  [1770055201.1620] manager: (tap28f46804-22): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.163 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.168 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.172 239853 DEBUG nova.compute.manager [req-806c4f4d-0cff-4efc-8917-410e450cbd45 req-e3dc53e6-c0c5-4235-a4c9-f4c391f5cfd7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.172 239853 DEBUG oslo_concurrency.lockutils [req-806c4f4d-0cff-4efc-8917-410e450cbd45 req-e3dc53e6-c0c5-4235-a4c9-f4c391f5cfd7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.173 239853 DEBUG oslo_concurrency.lockutils [req-806c4f4d-0cff-4efc-8917-410e450cbd45 req-e3dc53e6-c0c5-4235-a4c9-f4c391f5cfd7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.173 239853 DEBUG oslo_concurrency.lockutils [req-806c4f4d-0cff-4efc-8917-410e450cbd45 req-e3dc53e6-c0c5-4235-a4c9-f4c391f5cfd7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.173 239853 DEBUG nova.compute.manager [req-806c4f4d-0cff-4efc-8917-410e450cbd45 req-e3dc53e6-c0c5-4235-a4c9-f4c391f5cfd7 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Processing event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.181 239853 INFO nova.virt.libvirt.driver [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Instance destroyed successfully.#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.182 239853 DEBUG nova.objects.instance [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lazy-loading 'resources' on Instance uuid ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.200 239853 DEBUG nova.virt.libvirt.vif [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1288242109',display_name='tempest-TestEncryptedCinderVolumes-server-1288242109',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1288242109',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvZh/ElN8dmwg5kqdwORDsDMGtYV7W+gFnVOBSIjLYyV/rI6iEou7fmDWNrHI0Fxwj5cdNKTNIFMvPPLqPpnraTOvno/wTN57aN6pY1MzhxfV2DUooBXHiQdAUXSsyBmw==',key_name='tempest-TestEncryptedCinderVolumes-1730054458',keypairs=<?>,launch_index=0,launched_at=2026-02-02T17:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f1ccd20d4c994d098fc29da09fe94797',ramdisk_id='',reservation_id='r-uosun12z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1563506128',owner_user_name='tempest-TestEncryptedCinderVolumes-1563506128-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T17:59:25Z,user_data=None,user_id='c00d8fbb7f314affbdd560b88d4ce236',uuid=ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.200 239853 DEBUG nova.network.os_vif_util [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converting VIF {"id": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "address": "fa:16:3e:09:b1:6a", "network": {"id": "bad2c851-1c12-4a83-9873-6096fe5f4eec", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-106737852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f1ccd20d4c994d098fc29da09fe94797", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28f46804-22", "ovs_interfaceid": "28f46804-2246-4d92-95c9-bce2c6c02fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.201 239853 DEBUG nova.network.os_vif_util [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.201 239853 DEBUG os_vif [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.203 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.204 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f46804-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.206 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.208 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.209 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.214 239853 INFO os_vif [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b1:6a,bridge_name='br-int',has_traffic_filtering=True,id=28f46804-2246-4d92-95c9-bce2c6c02fcc,network=Network(bad2c851-1c12-4a83-9873-6096fe5f4eec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28f46804-22')#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.348 239853 INFO nova.virt.libvirt.driver [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Deleting instance files /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_del#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.349 239853 INFO nova.virt.libvirt.driver [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Deletion of /var/lib/nova/instances/ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae_del complete#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.403 239853 INFO nova.compute.manager [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.404 239853 DEBUG oslo.service.loopingcall [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.404 239853 DEBUG nova.compute.manager [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.404 239853 DEBUG nova.network.neutron [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 13:00:01 np0005605476 podman[270695]: 2026-02-02 18:00:01.477803524 +0000 UTC m=+0.048044954 container create 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 13:00:01 np0005605476 systemd[1]: Started libpod-conmon-9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8.scope.
Feb  2 13:00:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3370a65283fa4291b1ebbc073d8f940e418adc9bb46f88ce9cedae8194fbcae1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:01 np0005605476 podman[270695]: 2026-02-02 18:00:01.539357208 +0000 UTC m=+0.109598668 container init 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  2 13:00:01 np0005605476 podman[270695]: 2026-02-02 18:00:01.545399478 +0000 UTC m=+0.115640908 container start 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 13:00:01 np0005605476 podman[270695]: 2026-02-02 18:00:01.451477113 +0000 UTC m=+0.021718563 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [NOTICE]   (270751) : New worker (270753) forked
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [NOTICE]   (270751) : Loading success.
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.594 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 28f46804-2246-4d92-95c9-bce2c6c02fcc in datapath bad2c851-1c12-4a83-9873-6096fe5f4eec unbound from our chassis#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.596 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bad2c851-1c12-4a83-9873-6096fe5f4eec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.596 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cc023c0c-af13-4ab3-8f01-312fddd59e75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.597 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec namespace which is not needed anymore#033[00m
Feb  2 13:00:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 657 KiB/s wr, 51 op/s
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [NOTICE]   (270198) : haproxy version is 2.8.14-c23fe91
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [NOTICE]   (270198) : path to executable is /usr/sbin/haproxy
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [WARNING]  (270198) : Exiting Master process...
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [ALERT]    (270198) : Current worker (270200) exited with code 143 (Terminated)
Feb  2 13:00:01 np0005605476 neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec[270194]: [WARNING]  (270198) : All workers exited. Exiting... (0)
Feb  2 13:00:01 np0005605476 systemd[1]: libpod-3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58.scope: Deactivated successfully.
Feb  2 13:00:01 np0005605476 podman[270779]: 2026-02-02 18:00:01.693121829 +0000 UTC m=+0.038124685 container died 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 13:00:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58-userdata-shm.mount: Deactivated successfully.
Feb  2 13:00:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f4d8a180460e081e8976b140992c18a16ddc02f3349ceceefedfbd6758d0e12e-merged.mount: Deactivated successfully.
Feb  2 13:00:01 np0005605476 podman[270779]: 2026-02-02 18:00:01.72830826 +0000 UTC m=+0.073311096 container cleanup 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 13:00:01 np0005605476 systemd[1]: libpod-conmon-3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58.scope: Deactivated successfully.
Feb  2 13:00:01 np0005605476 podman[270794]: 2026-02-02 18:00:01.783980948 +0000 UTC m=+0.074838619 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Feb  2 13:00:01 np0005605476 podman[270817]: 2026-02-02 18:00:01.793031903 +0000 UTC m=+0.049268779 container remove 3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.797 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9c155a-dbd2-4b5f-8e22-a0dc8a1596f3]: (4, ('Mon Feb  2 06:00:01 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58)\n3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58\nMon Feb  2 06:00:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec (3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58)\n3d7152fa0749006e541ae455c164298ee0d64686d9a4f227bdcc2f152adccb58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.799 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9260b9-cd5d-41c3-b923-f5106f01cfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.800 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbad2c851-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.811 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 kernel: tapbad2c851-10: left promiscuous mode
Feb  2 13:00:01 np0005605476 nova_compute[239846]: 2026-02-02 18:00:01.818 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.820 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a31e4016-f83c-45de-944f-44f2d42922dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.841 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9e882e-978b-48d7-9149-50dfe43fcd2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.842 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[50c931d4-e2ac-4f86-9343-4e60c39aaa68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.855 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5e5e26-d4d7-40b0-9755-6928532f8bc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440424, 'reachable_time': 32967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270848, 'error': None, 'target': 'ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.857 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bad2c851-1c12-4a83-9873-6096fe5f4eec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 13:00:01 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:01.858 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[1afb4407-cfb6-4f95-ac8a-ecffd1f85ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:01 np0005605476 systemd[1]: run-netns-ovnmeta\x2dbad2c851\x2d1c12\x2d4a83\x2d9873\x2d6096fe5f4eec.mount: Deactivated successfully.
Feb  2 13:00:02 np0005605476 nova_compute[239846]: 2026-02-02 18:00:02.818 239853 DEBUG nova.network.neutron [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:02 np0005605476 nova_compute[239846]: 2026-02-02 18:00:02.831 239853 INFO nova.compute.manager [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Took 1.43 seconds to deallocate network for instance.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.153 239853 INFO nova.compute.manager [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Took 0.32 seconds to detach 1 volumes for instance.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.281 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.281 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.293 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.294 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.294 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.294 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.294 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] No waiting events found dispatching network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.295 239853 WARNING nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received unexpected event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c for instance with vm_state building and task_state spawning.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.295 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-vif-unplugged-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.295 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.295 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.296 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.296 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] No waiting events found dispatching network-vif-unplugged-28f46804-2246-4d92-95c9-bce2c6c02fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.296 239853 WARNING nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received unexpected event network-vif-unplugged-28f46804-2246-4d92-95c9-bce2c6c02fcc for instance with vm_state deleted and task_state None.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.296 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.297 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.297 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.297 239853 DEBUG oslo_concurrency.lockutils [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.297 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] No waiting events found dispatching network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.298 239853 WARNING nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received unexpected event network-vif-plugged-28f46804-2246-4d92-95c9-bce2c6c02fcc for instance with vm_state deleted and task_state None.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.298 239853 DEBUG nova.compute.manager [req-a2e692db-8961-4bb5-b818-35793c2e4c9c req-9a2bf10a-4dab-47f5-9a3e-695b5324990b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Received event network-vif-deleted-28f46804-2246-4d92-95c9-bce2c6c02fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.354 239853 DEBUG oslo_concurrency.processutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 527 KiB/s wr, 41 op/s
Feb  2 13:00:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:00:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341163951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.892 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.894 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055203.8922665, 89c3837c-cf0a-4953-a4fb-2477c854795f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.895 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] VM Started (Lifecycle Event)#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.898 239853 DEBUG oslo_concurrency.processutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.898 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.903 239853 DEBUG nova.compute.provider_tree [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.906 239853 INFO nova.virt.libvirt.driver [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Instance spawned successfully.#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.906 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.984 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:03 np0005605476 nova_compute[239846]: 2026-02-02 18:00:03.989 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.066 239853 DEBUG nova.scheduler.client.report [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.080 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.080 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.080 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.082 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.082 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.083 239853 DEBUG nova.virt.libvirt.driver [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.236 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.238 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055203.8936143, 89c3837c-cf0a-4953-a4fb-2477c854795f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.238 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] VM Paused (Lifecycle Event)#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.438 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.442 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055203.8978863, 89c3837c-cf0a-4953-a4fb-2477c854795f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.442 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] VM Resumed (Lifecycle Event)#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.450 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.600 239853 INFO nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Took 8.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.601 239853 DEBUG nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.611 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.615 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.723 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 13:00:04 np0005605476 nova_compute[239846]: 2026-02-02 18:00:04.771 239853 INFO nova.compute.manager [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Took 10.71 seconds to build instance.#033[00m
Feb  2 13:00:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:00:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/379398345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:00:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:00:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/379398345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:00:05 np0005605476 nova_compute[239846]: 2026-02-02 18:00:05.109 239853 DEBUG oslo_concurrency.lockutils [None req-eb6f4f35-4314-4bc8-b50d-49b0736942dc a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 13:00:05 np0005605476 nova_compute[239846]: 2026-02-02 18:00:05.163 239853 INFO nova.scheduler.client.report [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Deleted allocations for instance ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae
Feb  2 13:00:05 np0005605476 nova_compute[239846]: 2026-02-02 18:00:05.233 239853 DEBUG oslo_concurrency.lockutils [None req-2528cfd5-afbb-499c-b4f3-f3b216d228e5 c00d8fbb7f314affbdd560b88d4ce236 f1ccd20d4c994d098fc29da09fe94797 - - default default] Lock "ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 13:00:05 np0005605476 nova_compute[239846]: 2026-02-02 18:00:05.418 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 13:00:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 61 op/s
Feb  2 13:00:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:06 np0005605476 nova_compute[239846]: 2026-02-02 18:00:06.207 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 13:00:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:00:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805476472' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:00:06 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:00:06 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805476472' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.633818027 +0000 UTC m=+0.086561789 container create 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.565732669 +0000 UTC m=+0.018476451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 477 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 61 op/s
Feb  2 13:00:07 np0005605476 systemd[1]: Started libpod-conmon-115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921.scope.
Feb  2 13:00:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.743081495 +0000 UTC m=+0.195825287 container init 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.74930001 +0000 UTC m=+0.202043772 container start 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.752641404 +0000 UTC m=+0.205385186 container attach 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 13:00:07 np0005605476 epic_yonath[271034]: 167 167
Feb  2 13:00:07 np0005605476 systemd[1]: libpod-115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921.scope: Deactivated successfully.
Feb  2 13:00:07 np0005605476 conmon[271034]: conmon 115359585ac4564d708b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921.scope/container/memory.events
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.756332218 +0000 UTC m=+0.209075980 container died 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 13:00:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e6fff79364e2c7c5859bff202e396fd5aba61b798ee0237890f6146340a9a158-merged.mount: Deactivated successfully.
Feb  2 13:00:07 np0005605476 podman[271018]: 2026-02-02 18:00:07.79050432 +0000 UTC m=+0.243248082 container remove 115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:00:07 np0005605476 systemd[1]: libpod-conmon-115359585ac4564d708bff3d494d0b7d02e70e946aba2ad72c9ca06a98fa4921.scope: Deactivated successfully.
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:07 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:00:07 np0005605476 podman[271059]: 2026-02-02 18:00:07.938447127 +0000 UTC m=+0.062023718 container create 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:00:07 np0005605476 systemd[1]: Started libpod-conmon-63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269.scope.
Feb  2 13:00:07 np0005605476 podman[271059]: 2026-02-02 18:00:07.901413044 +0000 UTC m=+0.024989655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:08 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:08 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:08 np0005605476 podman[271059]: 2026-02-02 18:00:08.056947765 +0000 UTC m=+0.180524386 container init 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 13:00:08 np0005605476 podman[271059]: 2026-02-02 18:00:08.065972769 +0000 UTC m=+0.189549390 container start 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 13:00:08 np0005605476 podman[271059]: 2026-02-02 18:00:08.121872584 +0000 UTC m=+0.245449175 container attach 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:00:08 np0005605476 heuristic_proskuriakova[271076]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:00:08 np0005605476 heuristic_proskuriakova[271076]: --> All data devices are unavailable
Feb  2 13:00:08 np0005605476 systemd[1]: libpod-63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269.scope: Deactivated successfully.
Feb  2 13:00:08 np0005605476 podman[271059]: 2026-02-02 18:00:08.446448486 +0000 UTC m=+0.570025107 container died 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:00:08 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ec801be04ee5498aa23f37d41495b93ce31fe28c6cf8cd642795a33f71e6df2e-merged.mount: Deactivated successfully.
Feb  2 13:00:08 np0005605476 podman[271059]: 2026-02-02 18:00:08.609682484 +0000 UTC m=+0.733259085 container remove 63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 13:00:08 np0005605476 systemd[1]: libpod-conmon-63f0905ec3b26aceb6f02a3ab8ab38148aa05af85875202c3a01be180b03b269.scope: Deactivated successfully.
Feb  2 13:00:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:00:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988427306' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:00:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:00:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988427306' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.016837672 +0000 UTC m=+0.031844408 container create d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 13:00:09 np0005605476 systemd[1]: Started libpod-conmon-d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4.scope.
Feb  2 13:00:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.093685167 +0000 UTC m=+0.108691953 container init d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.099087229 +0000 UTC m=+0.114093965 container start d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.002659943 +0000 UTC m=+0.017666699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:09 np0005605476 recursing_booth[271187]: 167 167
Feb  2 13:00:09 np0005605476 systemd[1]: libpod-d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4.scope: Deactivated successfully.
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.103242906 +0000 UTC m=+0.118249652 container attach d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.103653248 +0000 UTC m=+0.118660014 container died d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:00:09 np0005605476 systemd[1]: var-lib-containers-storage-overlay-1da3948f434f70bd36328dc8da9a1badd0e57c9f008a315b5c6a48911413a2d6-merged.mount: Deactivated successfully.
Feb  2 13:00:09 np0005605476 podman[271171]: 2026-02-02 18:00:09.133111457 +0000 UTC m=+0.148118193 container remove d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:00:09 np0005605476 systemd[1]: libpod-conmon-d2dcb8e3a2db8b38fc9b5f24c84165687725e27838665c919ff3cb95dfbba4f4.scope: Deactivated successfully.
Feb  2 13:00:09 np0005605476 podman[271210]: 2026-02-02 18:00:09.250225726 +0000 UTC m=+0.036108868 container create ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 13:00:09 np0005605476 systemd[1]: Started libpod-conmon-ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac.scope.
Feb  2 13:00:09 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c8aa202a76a79096e1a00e0bf9cfa17d81ee6fab32181e3a2da1ade7cc3a3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c8aa202a76a79096e1a00e0bf9cfa17d81ee6fab32181e3a2da1ade7cc3a3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c8aa202a76a79096e1a00e0bf9cfa17d81ee6fab32181e3a2da1ade7cc3a3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:09 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c8aa202a76a79096e1a00e0bf9cfa17d81ee6fab32181e3a2da1ade7cc3a3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:09 np0005605476 podman[271210]: 2026-02-02 18:00:09.328707147 +0000 UTC m=+0.114590289 container init ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:00:09 np0005605476 podman[271210]: 2026-02-02 18:00:09.236958842 +0000 UTC m=+0.022842004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:09 np0005605476 podman[271210]: 2026-02-02 18:00:09.333478291 +0000 UTC m=+0.119361433 container start ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 13:00:09 np0005605476 podman[271210]: 2026-02-02 18:00:09.341013663 +0000 UTC m=+0.126896805 container attach ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]: {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    "0": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "devices": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "/dev/loop3"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            ],
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_name": "ceph_lv0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_size": "21470642176",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "name": "ceph_lv0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "tags": {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_name": "ceph",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.crush_device_class": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.encrypted": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.objectstore": "bluestore",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_id": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.vdo": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.with_tpm": "0"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            },
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "vg_name": "ceph_vg0"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        }
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    ],
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    "1": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "devices": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "/dev/loop4"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            ],
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_name": "ceph_lv1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_size": "21470642176",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "name": "ceph_lv1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "tags": {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_name": "ceph",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.crush_device_class": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.encrypted": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.objectstore": "bluestore",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_id": "1",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.vdo": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.with_tpm": "0"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            },
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "vg_name": "ceph_vg1"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        }
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    ],
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    "2": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "devices": [
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "/dev/loop5"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            ],
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_name": "ceph_lv2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_size": "21470642176",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "name": "ceph_lv2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "tags": {
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.cluster_name": "ceph",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.crush_device_class": "",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.encrypted": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.objectstore": "bluestore",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osd_id": "2",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.vdo": "0",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:                "ceph.with_tpm": "0"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            },
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "type": "block",
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:            "vg_name": "ceph_vg2"
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:        }
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]:    ]
Feb  2 13:00:09 np0005605476 trusting_sanderson[271226]: }
Feb  2 13:00:09 np0005605476 systemd[1]: libpod-ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac.scope: Deactivated successfully.
Feb  2 13:00:09 np0005605476 podman[271235]: 2026-02-02 18:00:09.652192668 +0000 UTC m=+0.025561431 container died ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:00:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 457 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 115 op/s
Feb  2 13:00:09 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b1c8aa202a76a79096e1a00e0bf9cfa17d81ee6fab32181e3a2da1ade7cc3a3f-merged.mount: Deactivated successfully.
Feb  2 13:00:10 np0005605476 podman[271235]: 2026-02-02 18:00:10.139260097 +0000 UTC m=+0.512628860 container remove ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:00:10 np0005605476 systemd[1]: libpod-conmon-ee791eb0708dcfecb32a331310a5c213f1bb2dada362a2167b83ff0803c1feac.scope: Deactivated successfully.
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.420 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.581576986 +0000 UTC m=+0.045357489 container create cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:00:10 np0005605476 systemd[1]: Started libpod-conmon-cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172.scope.
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.642 239853 DEBUG nova.compute.manager [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-changed-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.643 239853 DEBUG nova.compute.manager [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Refreshing instance network info cache due to event network-changed-5321080d-38e7-4244-b22c-caa9bf7aa80c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.643 239853 DEBUG oslo_concurrency.lockutils [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.644 239853 DEBUG oslo_concurrency.lockutils [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 13:00:10 np0005605476 nova_compute[239846]: 2026-02-02 18:00:10.644 239853 DEBUG nova.network.neutron [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Refreshing network info cache for port 5321080d-38e7-4244-b22c-caa9bf7aa80c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 13:00:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.661319582 +0000 UTC m=+0.125100105 container init cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.56610792 +0000 UTC m=+0.029888453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.667020713 +0000 UTC m=+0.130801226 container start cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.670338156 +0000 UTC m=+0.134118659 container attach cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:00:10 np0005605476 elastic_kilby[271328]: 167 167
Feb  2 13:00:10 np0005605476 systemd[1]: libpod-cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172.scope: Deactivated successfully.
Feb  2 13:00:10 np0005605476 conmon[271328]: conmon cb93754da3453c7c7902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172.scope/container/memory.events
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.674168744 +0000 UTC m=+0.137949247 container died cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:00:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:10 np0005605476 systemd[1]: var-lib-containers-storage-overlay-de66d5a115712727e9a3d11d926175c814a05589f1b33a1e235dc110e74a6894-merged.mount: Deactivated successfully.
Feb  2 13:00:10 np0005605476 podman[271311]: 2026-02-02 18:00:10.711920407 +0000 UTC m=+0.175700910 container remove cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:00:10 np0005605476 systemd[1]: libpod-conmon-cb93754da3453c7c7902cc0776c95e73447d1b05abb9c07ff9a7e08ae6bf2172.scope: Deactivated successfully.
Feb  2 13:00:10 np0005605476 podman[271352]: 2026-02-02 18:00:10.89485392 +0000 UTC m=+0.085507329 container create 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 13:00:10 np0005605476 podman[271352]: 2026-02-02 18:00:10.82988304 +0000 UTC m=+0.020536479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:00:10 np0005605476 systemd[1]: Started libpod-conmon-4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23.scope.
Feb  2 13:00:10 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1deb0e6990c395fdd016e1e9da186650acd7d05a6b3b680f4e32f027c38237d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1deb0e6990c395fdd016e1e9da186650acd7d05a6b3b680f4e32f027c38237d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1deb0e6990c395fdd016e1e9da186650acd7d05a6b3b680f4e32f027c38237d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:11 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1deb0e6990c395fdd016e1e9da186650acd7d05a6b3b680f4e32f027c38237d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:11 np0005605476 podman[271352]: 2026-02-02 18:00:11.057613823 +0000 UTC m=+0.248267282 container init 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 13:00:11 np0005605476 podman[271352]: 2026-02-02 18:00:11.06636357 +0000 UTC m=+0.257016979 container start 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 13:00:11 np0005605476 podman[271352]: 2026-02-02 18:00:11.08020772 +0000 UTC m=+0.270861189 container attach 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 13:00:11 np0005605476 nova_compute[239846]: 2026-02-02 18:00:11.209 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:11 np0005605476 lvm[271447]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:00:11 np0005605476 lvm[271446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:00:11 np0005605476 lvm[271446]: VG ceph_vg0 finished
Feb  2 13:00:11 np0005605476 lvm[271447]: VG ceph_vg1 finished
Feb  2 13:00:11 np0005605476 lvm[271449]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:00:11 np0005605476 lvm[271449]: VG ceph_vg2 finished
Feb  2 13:00:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 130 op/s
Feb  2 13:00:11 np0005605476 trusting_noether[271368]: {}
Feb  2 13:00:11 np0005605476 systemd[1]: libpod-4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23.scope: Deactivated successfully.
Feb  2 13:00:11 np0005605476 podman[271352]: 2026-02-02 18:00:11.80873623 +0000 UTC m=+0.999389659 container died 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 13:00:11 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e1deb0e6990c395fdd016e1e9da186650acd7d05a6b3b680f4e32f027c38237d-merged.mount: Deactivated successfully.
Feb  2 13:00:11 np0005605476 podman[271352]: 2026-02-02 18:00:11.866694582 +0000 UTC m=+1.057347991 container remove 4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_noether, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:00:11 np0005605476 systemd[1]: libpod-conmon-4f3bc0e0cde034eb76f5fd15b45a5c382b0caeab59757330ea7470886e917f23.scope: Deactivated successfully.
Feb  2 13:00:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:00:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:11 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:00:11 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:12 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:00:13 np0005605476 nova_compute[239846]: 2026-02-02 18:00:13.032 239853 DEBUG nova.network.neutron [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updated VIF entry in instance network info cache for port 5321080d-38e7-4244-b22c-caa9bf7aa80c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 13:00:13 np0005605476 nova_compute[239846]: 2026-02-02 18:00:13.036 239853 DEBUG nova.network.neutron [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updating instance_info_cache with network_info: [{"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:13 np0005605476 nova_compute[239846]: 2026-02-02 18:00:13.077 239853 DEBUG oslo_concurrency.lockutils [req-0723b4bb-6cb5-4296-9ebc-fd216465ae80 req-5be5a514-b5ca-4184-81c3-858917d6a8ba e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-89c3837c-cf0a-4953-a4fb-2477c854795f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 13:00:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 119 op/s
Feb  2 13:00:14 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:14Z|00265|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 13:00:14 np0005605476 nova_compute[239846]: 2026-02-02 18:00:14.917 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:15 np0005605476 nova_compute[239846]: 2026-02-02 18:00:15.421 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 21 KiB/s wr, 154 op/s
Feb  2 13:00:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:16 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:16Z|00061|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Feb  2 13:00:16 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:16Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:db:16:59 10.100.0.6
Feb  2 13:00:16 np0005605476 nova_compute[239846]: 2026-02-02 18:00:16.179 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055201.1779366, ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:16 np0005605476 nova_compute[239846]: 2026-02-02 18:00:16.180 239853 INFO nova.compute.manager [-] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] VM Stopped (Lifecycle Event)#033[00m
Feb  2 13:00:16 np0005605476 nova_compute[239846]: 2026-02-02 18:00:16.213 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:16 np0005605476 nova_compute[239846]: 2026-02-02 18:00:16.244 239853 DEBUG nova.compute.manager [None req-9a56b2a2-1136-4c3c-b61b-1d1ec1a32139 - - - - - -] [instance: ddfc3fe5-dc36-4ca8-ab8a-523ed936d1ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.4 KiB/s wr, 105 op/s
Feb  2 13:00:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:17.795 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:00:17 np0005605476 nova_compute[239846]: 2026-02-02 18:00:17.795 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:17 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:17.796 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 13:00:18 np0005605476 nova_compute[239846]: 2026-02-02 18:00:18.706 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:00:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7341 writes, 33K keys, 7341 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7341 writes, 7341 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2057 writes, 9427 keys, 2057 commit groups, 1.0 writes per commit group, ingest: 12.19 MB, 0.02 MB/s#012Interval WAL: 2057 writes, 2057 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.5      0.48              0.09        17    0.028       0      0       0.0       0.0#012  L6      1/0   10.02 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    188.4    156.1      0.84              0.31        16    0.052     81K   9464       0.0       0.0#012 Sum      1/0   10.02 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5    120.0    127.6      1.32              0.39        33    0.040     81K   9464       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    168.6    173.3      0.28              0.12         8    0.035     26K   3113       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    188.4    156.1      0.84              0.31        16    0.052     81K   9464       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     77.9      0.48              0.09        16    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.07 MB/s read, 1.3 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 304.00 MB usage: 18.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00018 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1203,17.58 MB,5.78407%) FilterBlock(34,235.36 KB,0.0756063%) IndexBlock(34,462.39 KB,0.148537%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 13:00:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 114 op/s
Feb  2 13:00:20 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:20Z|00063|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Feb  2 13:00:20 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:20Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:db:16:59 10.100.0.6
Feb  2 13:00:20 np0005605476 nova_compute[239846]: 2026-02-02 18:00:20.421 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:20 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:20.798 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:20 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:20Z|00266|binding|INFO|Releasing lport 51e5cd2d-8b15-4de8-985f-c87fe41124e3 from this chassis (sb_readonly=0)
Feb  2 13:00:20 np0005605476 nova_compute[239846]: 2026-02-02 18:00:20.905 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:21 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:21Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:16:59 10.100.0.6
Feb  2 13:00:21 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:21Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:16:59 10.100.0.6
Feb  2 13:00:21 np0005605476 nova_compute[239846]: 2026-02-02 18:00:21.215 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 621 KiB/s rd, 12 KiB/s wr, 68 op/s
Feb  2 13:00:22 np0005605476 nova_compute[239846]: 2026-02-02 18:00:22.523 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 44 op/s
Feb  2 13:00:25 np0005605476 nova_compute[239846]: 2026-02-02 18:00:25.423 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 45 op/s
Feb  2 13:00:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.217 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.694 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.694 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.695 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.695 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.695 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.696 239853 INFO nova.compute.manager [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Terminating instance#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.697 239853 DEBUG nova.compute.manager [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 13:00:26 np0005605476 kernel: tap5321080d-38 (unregistering): left promiscuous mode
Feb  2 13:00:26 np0005605476 NetworkManager[49022]: <info>  [1770055226.7541] device (tap5321080d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.762 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:26Z|00267|binding|INFO|Releasing lport 5321080d-38e7-4244-b22c-caa9bf7aa80c from this chassis (sb_readonly=0)
Feb  2 13:00:26 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:26Z|00268|binding|INFO|Setting lport 5321080d-38e7-4244-b22c-caa9bf7aa80c down in Southbound
Feb  2 13:00:26 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:26Z|00269|binding|INFO|Removing iface tap5321080d-38 ovn-installed in OVS
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.767 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.775 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:26.774 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:16:59 10.100.0.6'], port_security=['fa:16:3e:db:16:59 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '89c3837c-cf0a-4953-a4fb-2477c854795f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a7f311-fed2-4a09-8203-270dceb25c76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '224fb1fcaf0e4ffb9c3e3e7792ff25c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2cd3f756-a435-48cd-8232-7783559a028a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb5056cf-4723-4f16-bde5-a512c125abd4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=5321080d-38e7-4244-b22c-caa9bf7aa80c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:00:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:26.777 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 5321080d-38e7-4244-b22c-caa9bf7aa80c in datapath 82a7f311-fed2-4a09-8203-270dceb25c76 unbound from our chassis#033[00m
Feb  2 13:00:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:26.778 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82a7f311-fed2-4a09-8203-270dceb25c76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 13:00:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:26.780 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[f4006de9-b971-49d6-ae52-f24d590d2586]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:26 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:26.781 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 namespace which is not needed anymore#033[00m
Feb  2 13:00:26 np0005605476 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb  2 13:00:26 np0005605476 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 14.389s CPU time.
Feb  2 13:00:26 np0005605476 systemd-machined[208080]: Machine qemu-28-instance-0000001c terminated.
Feb  2 13:00:26 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [NOTICE]   (270751) : haproxy version is 2.8.14-c23fe91
Feb  2 13:00:26 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [NOTICE]   (270751) : path to executable is /usr/sbin/haproxy
Feb  2 13:00:26 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [WARNING]  (270751) : Exiting Master process...
Feb  2 13:00:26 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [ALERT]    (270751) : Current worker (270753) exited with code 143 (Terminated)
Feb  2 13:00:26 np0005605476 neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76[270747]: [WARNING]  (270751) : All workers exited. Exiting... (0)
Feb  2 13:00:26 np0005605476 systemd[1]: libpod-9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8.scope: Deactivated successfully.
Feb  2 13:00:26 np0005605476 podman[271516]: 2026-02-02 18:00:26.927918818 +0000 UTC m=+0.054511526 container died 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.936 239853 INFO nova.virt.libvirt.driver [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Instance destroyed successfully.#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.936 239853 DEBUG nova.objects.instance [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lazy-loading 'resources' on Instance uuid 89c3837c-cf0a-4953-a4fb-2477c854795f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.950 239853 DEBUG nova.virt.libvirt.vif [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T17:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1794376160',display_name='tempest-TransferEncryptedVolumeTest-server-1794376160',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1794376160',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8emWf2dZHuLjJdK2r6+9lNaX1UyiGrMcjYDFGV1A4hVxbkjGTiV40O0lk0VMCYoJVKig0Oz32lve3+T+BxV8uFR6g7LwMcz9GOEB0HqgwX9cw1F0t8GaPWIvr9Eb06Iw==',key_name='tempest-TransferEncryptedVolumeTest-432157810',keypairs=<?>,launch_index=0,launched_at=2026-02-02T18:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='224fb1fcaf0e4ffb9c3e3e7792ff25c6',ramdisk_id='',reservation_id='r-hgasfw90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1386167090',owner_user_name='tempest-TransferEncryptedVolumeTest-1386167090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T18:00:04Z,user_data=None,user_id='a3de5c2f3ec44d4684754f1707ba5236',uuid=89c3837c-cf0a-4953-a4fb-2477c854795f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.951 239853 DEBUG nova.network.os_vif_util [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converting VIF {"id": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "address": "fa:16:3e:db:16:59", "network": {"id": "82a7f311-fed2-4a09-8203-270dceb25c76", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-211896023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "224fb1fcaf0e4ffb9c3e3e7792ff25c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5321080d-38", "ovs_interfaceid": "5321080d-38e7-4244-b22c-caa9bf7aa80c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.952 239853 DEBUG nova.network.os_vif_util [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.952 239853 DEBUG os_vif [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.956 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.957 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5321080d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.974 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.977 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:00:26 np0005605476 nova_compute[239846]: 2026-02-02 18:00:26.979 239853 INFO os_vif [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:16:59,bridge_name='br-int',has_traffic_filtering=True,id=5321080d-38e7-4244-b22c-caa9bf7aa80c,network=Network(82a7f311-fed2-4a09-8203-270dceb25c76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5321080d-38')#033[00m
Feb  2 13:00:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8-userdata-shm.mount: Deactivated successfully.
Feb  2 13:00:26 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3370a65283fa4291b1ebbc073d8f940e418adc9bb46f88ce9cedae8194fbcae1-merged.mount: Deactivated successfully.
Feb  2 13:00:26 np0005605476 podman[271516]: 2026-02-02 18:00:26.995015048 +0000 UTC m=+0.121607736 container cleanup 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:00:27 np0005605476 systemd[1]: libpod-conmon-9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8.scope: Deactivated successfully.
Feb  2 13:00:27 np0005605476 podman[271573]: 2026-02-02 18:00:27.054124573 +0000 UTC m=+0.040979225 container remove 9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.058 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf02820-d007-440e-88f6-a6c8712b5c70]: (4, ('Mon Feb  2 06:00:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8)\n9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8\nMon Feb  2 06:00:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 (9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8)\n9de86b0c86793bedd1e4879184ea7f50bdcebde57324cbba35cf4ea71311bbd8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.059 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb4f16c-92bd-4a7d-994f-77c82cad6b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.060 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a7f311-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.062 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:27 np0005605476 kernel: tap82a7f311-f0: left promiscuous mode
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.068 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.071 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c89161-c611-4200-acd1-6c1c0d6d5d76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.083 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[851ab3b4-b82b-4b91-9b1e-10bce0cf2284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.084 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b15a170b-b550-48f6-89fe-4f24cdca877f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.096 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[a93e291e-ab30-4a9e-acfa-20d2b2e445d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444224, 'reachable_time': 31053, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271592, 'error': None, 'target': 'ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 systemd[1]: run-netns-ovnmeta\x2d82a7f311\x2dfed2\x2d4a09\x2d8203\x2d270dceb25c76.mount: Deactivated successfully.
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.101 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82a7f311-fed2-4a09-8203-270dceb25c76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 13:00:27 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:27.101 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f12488-e045-45c8-b1ab-27ccdae86703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.115 239853 INFO nova.virt.libvirt.driver [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Deleting instance files /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f_del#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.116 239853 INFO nova.virt.libvirt.driver [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Deletion of /var/lib/nova/instances/89c3837c-cf0a-4953-a4fb-2477c854795f_del complete#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.170 239853 INFO nova.compute.manager [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.171 239853 DEBUG oslo.service.loopingcall [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.171 239853 DEBUG nova.compute.manager [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.171 239853 DEBUG nova.network.neutron [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.528 239853 DEBUG nova.compute.manager [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-unplugged-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.528 239853 DEBUG oslo_concurrency.lockutils [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.528 239853 DEBUG oslo_concurrency.lockutils [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.528 239853 DEBUG oslo_concurrency.lockutils [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.528 239853 DEBUG nova.compute.manager [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] No waiting events found dispatching network-vif-unplugged-5321080d-38e7-4244-b22c-caa9bf7aa80c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:27 np0005605476 nova_compute[239846]: 2026-02-02 18:00:27.529 239853 DEBUG nova.compute.manager [req-a4a6acbd-cea2-4c87-9e50-808cccd00f2f req-4614ee24-d246-469e-966a-e6ba73e763bb e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-unplugged-5321080d-38e7-4244-b22c-caa9bf7aa80c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 13:00:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 9 op/s
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.299 239853 DEBUG nova.network.neutron [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.319 239853 INFO nova.compute.manager [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Took 1.15 seconds to deallocate network for instance.#033[00m
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.566 239853 INFO nova.compute.manager [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.607 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.607 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:28 np0005605476 nova_compute[239846]: 2026-02-02 18:00:28.649 239853 DEBUG oslo_concurrency.processutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:29 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:00:29 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277650610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.177 239853 DEBUG oslo_concurrency.processutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.183 239853 DEBUG nova.compute.provider_tree [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.210 239853 DEBUG nova.scheduler.client.report [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.236 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.269 239853 INFO nova.scheduler.client.report [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Deleted allocations for instance 89c3837c-cf0a-4953-a4fb-2477c854795f#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.368 239853 DEBUG oslo_concurrency.lockutils [None req-4283f1e6-cf50-47da-85ad-58a172e4bb20 a3de5c2f3ec44d4684754f1707ba5236 224fb1fcaf0e4ffb9c3e3e7792ff25c6 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.617 239853 DEBUG nova.compute.manager [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.618 239853 DEBUG oslo_concurrency.lockutils [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.618 239853 DEBUG oslo_concurrency.lockutils [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.618 239853 DEBUG oslo_concurrency.lockutils [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "89c3837c-cf0a-4953-a4fb-2477c854795f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.618 239853 DEBUG nova.compute.manager [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] No waiting events found dispatching network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.618 239853 WARNING nova.compute.manager [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received unexpected event network-vif-plugged-5321080d-38e7-4244-b22c-caa9bf7aa80c for instance with vm_state deleted and task_state None.#033[00m
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.619 239853 DEBUG nova.compute.manager [req-30757429-3d40-4cbe-97c9-28e08f60cb10 req-b210a4ee-e80c-4cb0-a14c-c125a8cce9bd e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Received event network-vif-deleted-5321080d-38e7-4244-b22c-caa9bf7aa80c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:29 np0005605476 podman[271616]: 2026-02-02 18:00:29.63973687 +0000 UTC m=+0.088135674 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb  2 13:00:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 17 KiB/s wr, 15 op/s
Feb  2 13:00:29 np0005605476 nova_compute[239846]: 2026-02-02 18:00:29.737 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:30 np0005605476 nova_compute[239846]: 2026-02-02 18:00:30.425 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 18 KiB/s wr, 19 op/s
Feb  2 13:00:31 np0005605476 nova_compute[239846]: 2026-02-02 18:00:31.975 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:00:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783471414' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:00:32 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:00:32 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783471414' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:00:32 np0005605476 podman[271636]: 2026-02-02 18:00:32.650829121 +0000 UTC m=+0.097151117 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb  2 13:00:33 np0005605476 nova_compute[239846]: 2026-02-02 18:00:33.670 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 14 KiB/s wr, 18 op/s
Feb  2 13:00:35 np0005605476 nova_compute[239846]: 2026-02-02 18:00:35.428 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 14 KiB/s wr, 37 op/s
Feb  2 13:00:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:00:36
Feb  2 13:00:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:00:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:00:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', 'images', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root']
Feb  2 13:00:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:00:36 np0005605476 nova_compute[239846]: 2026-02-02 18:00:36.851 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:36 np0005605476 nova_compute[239846]: 2026-02-02 18:00:36.939 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:36 np0005605476 nova_compute[239846]: 2026-02-02 18:00:36.976 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 4.5 KiB/s wr, 37 op/s
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:00:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:00:39 np0005605476 nova_compute[239846]: 2026-02-02 18:00:39.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 4.5 KiB/s wr, 37 op/s
Feb  2 13:00:40 np0005605476 nova_compute[239846]: 2026-02-02 18:00:40.429 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 938 B/s wr, 31 op/s
Feb  2 13:00:41 np0005605476 nova_compute[239846]: 2026-02-02 18:00:41.934 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055226.932658, 89c3837c-cf0a-4953-a4fb-2477c854795f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:41 np0005605476 nova_compute[239846]: 2026-02-02 18:00:41.935 239853 INFO nova.compute.manager [-] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] VM Stopped (Lifecycle Event)#033[00m
Feb  2 13:00:41 np0005605476 nova_compute[239846]: 2026-02-02 18:00:41.955 239853 DEBUG nova.compute.manager [None req-8a14b629-3ecf-439e-8ca6-75f963cbcffa - - - - - -] [instance: 89c3837c-cf0a-4953-a4fb-2477c854795f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.015 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.605 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.606 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.623 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.712 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.712 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.722 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.722 239853 INFO nova.compute.claims [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.797 239853 DEBUG nova.scheduler.client.report [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.816 239853 DEBUG nova.scheduler.client.report [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.816 239853 DEBUG nova.compute.provider_tree [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.830 239853 DEBUG nova.scheduler.client.report [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.847 239853 DEBUG nova.scheduler.client.report [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 13:00:42 np0005605476 nova_compute[239846]: 2026-02-02 18:00:42.877 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:43 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:00:43 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095010707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.400 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.406 239853 DEBUG nova.compute.provider_tree [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.422 239853 DEBUG nova.scheduler.client.report [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.444 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.445 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.485 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.486 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.503 239853 INFO nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.524 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.622 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.623 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.623 239853 INFO nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Creating image(s)#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.643 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.667 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.690 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.694 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.743 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.744 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.745 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.745 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "1582dfd15e09c33ccb1810a3206c7cc36a7e5f68" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.770 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.774 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:43 np0005605476 nova_compute[239846]: 2026-02-02 18:00:43.803 239853 DEBUG nova.policy [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b9d3a264efbe443c860b536305fa7e8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '896604c79c574097a167451efa4ee5b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.044 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1582dfd15e09c33ccb1810a3206c7cc36a7e5f68 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.124 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] resizing rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.216 239853 DEBUG nova.objects.instance [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.229 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.230 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Ensure instance console log exists: /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.230 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.231 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.231 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.264 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.264 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.265 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.265 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.265 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:00:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228817045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.835 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Successfully created port: 75586f61-07ff-4cd0-9aa1-9845359a1fe6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 13:00:44 np0005605476 nova_compute[239846]: 2026-02-02 18:00:44.851 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.009 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.011 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4216MB free_disk=59.987772955559194GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.012 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.012 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.078 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.079 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.079 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.115 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.432 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:00:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504124852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.645 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.650 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.665 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.682 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.682 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 317 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Feb  2 13:00:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.746 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Successfully updated port: 75586f61-07ff-4cd0-9aa1-9845359a1fe6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.764 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.765 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquired lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.765 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.902 239853 DEBUG nova.compute.manager [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-changed-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.902 239853 DEBUG nova.compute.manager [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Refreshing instance network info cache due to event network-changed-75586f61-07ff-4cd0-9aa1-9845359a1fe6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.903 239853 DEBUG oslo_concurrency.lockutils [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 13:00:45 np0005605476 nova_compute[239846]: 2026-02-02 18:00:45.950 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 13:00:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:46.653 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:46.653 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:46.654 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.682 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.721 239853 DEBUG nova.network.neutron [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating instance_info_cache with network_info: [{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.742 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Releasing lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.742 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Instance network_info: |[{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.743 239853 DEBUG oslo_concurrency.lockutils [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.743 239853 DEBUG nova.network.neutron [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Refreshing network info cache for port 75586f61-07ff-4cd0-9aa1-9845359a1fe6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.746 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Start _get_guest_xml network_info=[{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'encryption_options': None, 'size': 0, 'image_id': '88ad7b87-724c-4a9f-a946-6c9736783609'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.750 239853 WARNING nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.754 239853 DEBUG nova.virt.libvirt.host [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.755 239853 DEBUG nova.virt.libvirt.host [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.762 239853 DEBUG nova.virt.libvirt.host [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.762 239853 DEBUG nova.virt.libvirt.host [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.763 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.763 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T17:42:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='19f1ae7b-ea95-44f8-906f-33dc7d64ca75',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T17:42:37Z,direct_url=<?>,disk_format='qcow2',id=88ad7b87-724c-4a9f-a946-6c9736783609,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='628bef10fb3a45d18abe453a0d66d537',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T17:42:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.764 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.764 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.764 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.764 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.764 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.765 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.765 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.765 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.765 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.765 239853 DEBUG nova.virt.hardware [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 13:00:46 np0005605476 nova_compute[239846]: 2026-02-02 18:00:46.768 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.019 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.270 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:00:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:00:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.324 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.345 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.350 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003544035744696551 of space, bias 1.0, pg target 0.10632107234089654 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029146197625123144 of space, bias 1.0, pg target 0.8743859287536944 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2978353230229903e-06 of space, bias 1.0, pg target 0.0006893505969068971 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319841658502 of space, bias 1.0, pg target 0.19992959524975504 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.677089843432047e-07 of space, bias 4.0, pg target 0.0011612507812118456 quantized to 16 (current 16)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:00:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 317 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Feb  2 13:00:47 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:00:47 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535595281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.889 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.891 239853 DEBUG nova.virt.libvirt.vif [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T18:00:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1917551112',display_name='tempest-SnapshotDataIntegrityTests-server-1917551112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1917551112',id=29,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcTSzO5/8KojM1MLUWkMK6qy2V5C4TV9O40HgYTNurKXbRFAxZyQQsb6UT9A+x9JmkPDulSDIxxh2hVKzYhHYd9VcbaUH4uFix/tlL5lTqqzCf4k5lqJSGlll+jKCctdw==',key_name='tempest-keypair-1269425205',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='896604c79c574097a167451efa4ee5b2',ramdisk_id='',reservation_id='r-pwwxyol5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1188948708',owner_user_name='tempest-SnapshotDataIntegrityTests-1188948708-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T18:00:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9d3a264efbe443c860b536305fa7e8a',uuid=3ba4448b-74c6-491d-bbbe-a1f5e2e9852e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.891 239853 DEBUG nova.network.os_vif_util [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converting VIF {"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.892 239853 DEBUG nova.network.os_vif_util [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:00:47 np0005605476 nova_compute[239846]: 2026-02-02 18:00:47.893 239853 DEBUG nova.objects.instance [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.028 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] End _get_guest_xml xml=<domain type="kvm">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <uuid>3ba4448b-74c6-491d-bbbe-a1f5e2e9852e</uuid>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <name>instance-0000001d</name>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <memory>131072</memory>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <vcpu>1</vcpu>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <metadata>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-1917551112</nova:name>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:creationTime>2026-02-02 18:00:46</nova:creationTime>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:flavor name="m1.nano">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:memory>128</nova:memory>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:disk>1</nova:disk>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:swap>0</nova:swap>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:vcpus>1</nova:vcpus>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </nova:flavor>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:owner>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:user uuid="b9d3a264efbe443c860b536305fa7e8a">tempest-SnapshotDataIntegrityTests-1188948708-project-member</nova:user>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:project uuid="896604c79c574097a167451efa4ee5b2">tempest-SnapshotDataIntegrityTests-1188948708</nova:project>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </nova:owner>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:root type="image" uuid="88ad7b87-724c-4a9f-a946-6c9736783609"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <nova:ports>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <nova:port uuid="75586f61-07ff-4cd0-9aa1-9845359a1fe6">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        </nova:port>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </nova:ports>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </nova:instance>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </metadata>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <sysinfo type="smbios">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <system>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="manufacturer">RDO</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="product">OpenStack Compute</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="serial">3ba4448b-74c6-491d-bbbe-a1f5e2e9852e</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="uuid">3ba4448b-74c6-491d-bbbe-a1f5e2e9852e</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <entry name="family">Virtual Machine</entry>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </system>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </sysinfo>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <os>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <boot dev="hd"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <smbios mode="sysinfo"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </os>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <features>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <acpi/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <apic/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <vmcoreinfo/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </features>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <clock offset="utc">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <timer name="hpet" present="no"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </clock>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <cpu mode="host-model" match="exact">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </cpu>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  <devices>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <disk type="network" device="disk">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </source>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </auth>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <target dev="vda" bus="virtio"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </disk>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <disk type="network" device="cdrom">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <driver type="raw" cache="none"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <source protocol="rbd" name="vms/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <host name="192.168.122.100" port="6789"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </source>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <auth username="openstack">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:        <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      </auth>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <target dev="sda" bus="sata"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </disk>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <interface type="ethernet">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <mac address="fa:16:3e:30:d8:b7"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <mtu size="1442"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <target dev="tap75586f61-07"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </interface>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <serial type="pty">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <log file="/var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/console.log" append="off"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </serial>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <video>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <model type="virtio"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </video>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <input type="tablet" bus="usb"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <rng model="virtio">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <backend model="random">/dev/urandom</backend>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </rng>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <controller type="usb" index="0"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    <memballoon model="virtio">
Feb  2 13:00:48 np0005605476 nova_compute[239846]:      <stats period="10"/>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:    </memballoon>
Feb  2 13:00:48 np0005605476 nova_compute[239846]:  </devices>
Feb  2 13:00:48 np0005605476 nova_compute[239846]: </domain>
Feb  2 13:00:48 np0005605476 nova_compute[239846]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.029 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Preparing to wait for external event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.029 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.029 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.030 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.030 239853 DEBUG nova.virt.libvirt.vif [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T18:00:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1917551112',display_name='tempest-SnapshotDataIntegrityTests-server-1917551112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1917551112',id=29,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcTSzO5/8KojM1MLUWkMK6qy2V5C4TV9O40HgYTNurKXbRFAxZyQQsb6UT9A+x9JmkPDulSDIxxh2hVKzYhHYd9VcbaUH4uFix/tlL5lTqqzCf4k5lqJSGlll+jKCctdw==',key_name='tempest-keypair-1269425205',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='896604c79c574097a167451efa4ee5b2',ramdisk_id='',reservation_id='r-pwwxyol5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1188948708',owner_user_name='tempest-SnapshotDataIntegrityTests-1188948708-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T18:00:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9d3a264efbe443c860b536305fa7e8a',uuid=3ba4448b-74c6-491d-bbbe-a1f5e2e9852e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.031 239853 DEBUG nova.network.os_vif_util [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converting VIF {"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.031 239853 DEBUG nova.network.os_vif_util [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.032 239853 DEBUG os_vif [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.032 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.033 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.033 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.036 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.036 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75586f61-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.036 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75586f61-07, col_values=(('external_ids', {'iface-id': '75586f61-07ff-4cd0-9aa1-9845359a1fe6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:d8:b7', 'vm-uuid': '3ba4448b-74c6-491d-bbbe-a1f5e2e9852e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.038 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.0394] manager: (tap75586f61-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.041 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.043 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.045 239853 INFO os_vif [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07')#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.091 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.091 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.091 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No VIF found with MAC fa:16:3e:30:d8:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.092 239853 INFO nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Using config drive#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.117 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.170 239853 DEBUG nova.network.neutron [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updated VIF entry in instance network info cache for port 75586f61-07ff-4cd0-9aa1-9845359a1fe6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.171 239853 DEBUG nova.network.neutron [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating instance_info_cache with network_info: [{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.185 239853 DEBUG oslo_concurrency.lockutils [req-d62a5ee7-c3d3-41b6-921f-3da5fe7623bf req-974d6859-a298-430a-817a-401ec72e3868 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.366 239853 INFO nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Creating config drive at /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.373 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpow29eidp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.504 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpow29eidp" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.538 239853 DEBUG nova.storage.rbd_utils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] rbd image 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.543 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.666 239853 DEBUG oslo_concurrency.processutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.667 239853 INFO nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Deleting local config drive /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e/disk.config because it was imported into RBD.#033[00m
Feb  2 13:00:48 np0005605476 kernel: tap75586f61-07: entered promiscuous mode
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.7215] manager: (tap75586f61-07): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Feb  2 13:00:48 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:48Z|00270|binding|INFO|Claiming lport 75586f61-07ff-4cd0-9aa1-9845359a1fe6 for this chassis.
Feb  2 13:00:48 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:48Z|00271|binding|INFO|75586f61-07ff-4cd0-9aa1-9845359a1fe6: Claiming fa:16:3e:30:d8:b7 10.100.0.7
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.723 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.728 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.730 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.743 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:d8:b7 10.100.0.7'], port_security=['fa:16:3e:30:d8:b7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3ba4448b-74c6-491d-bbbe-a1f5e2e9852e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '896604c79c574097a167451efa4ee5b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aab8f39a-c545-46f7-8ee0-60f614dcdcb6 be603530-4fe5-49e9-9381-63540b33bd98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5de96b67-0aa7-446e-91be-d1e0250aa316, chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=75586f61-07ff-4cd0-9aa1-9845359a1fe6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.744 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 75586f61-07ff-4cd0-9aa1-9845359a1fe6 in datapath 967fc097-5eb9-45d1-9d27-cd16a27cb74e bound to our chassis#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.745 155391 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 967fc097-5eb9-45d1-9d27-cd16a27cb74e#033[00m
Feb  2 13:00:48 np0005605476 systemd-machined[208080]: New machine qemu-29-instance-0000001d.
Feb  2 13:00:48 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:48Z|00272|binding|INFO|Setting lport 75586f61-07ff-4cd0-9aa1-9845359a1fe6 ovn-installed in OVS
Feb  2 13:00:48 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:48Z|00273|binding|INFO|Setting lport 75586f61-07ff-4cd0-9aa1-9845359a1fe6 up in Southbound
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.757 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[44997be3-6b58-4059-ac48-decd5ad3e874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.758 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap967fc097-51 in ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.760 246686 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap967fc097-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.760 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[43e5afd9-eb86-4da8-9894-6764e59149cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.760 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8c28e7-68a7-414b-a257-1a6324b694eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.760 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Feb  2 13:00:48 np0005605476 systemd-udevd[272032]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.773 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[c18d4c9c-b447-4c91-ae36-f34597813c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.784 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[422f35f2-72f9-4345-b001-20540ea965cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.7887] device (tap75586f61-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.7895] device (tap75586f61-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.808 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[92cbb903-42a8-440f-bb1f-e857d61c6005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.811 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[867ea82d-1dc1-4ad4-a5ca-f55b84c9b1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.8129] manager: (tap967fc097-50): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.832 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[ae14c070-0ce5-409e-844a-28fa618fcda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.836 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[6120c052-9371-46ef-881a-10afe36d44a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.8548] device (tap967fc097-50): carrier: link connected
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.861 246720 DEBUG oslo.privsep.daemon [-] privsep: reply[0e34b51c-cbf4-415f-9f4a-c48ddf8f5d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.875 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[618a2230-f813-4060-b9d9-a96a6e7799f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap967fc097-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:82:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449018, 'reachable_time': 26421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272063, 'error': None, 'target': 'ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.886 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[669fe9b4-7581-4a38-9355-3d815876821c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:8230'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449018, 'tstamp': 449018}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272064, 'error': None, 'target': 'ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.901 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[2540f0a2-786f-45de-b3a9-79cff9059cf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap967fc097-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:82:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449018, 'reachable_time': 26421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272065, 'error': None, 'target': 'ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.921 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7894780-ae17-4a78-a9f5-5c0d765c5827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.982 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccc1043-977d-4106-aa36-72e3d732cdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.983 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap967fc097-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.983 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.984 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap967fc097-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 kernel: tap967fc097-50: entered promiscuous mode
Feb  2 13:00:48 np0005605476 NetworkManager[49022]: <info>  [1770055248.9860] manager: (tap967fc097-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.985 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.987 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap967fc097-50, col_values=(('external_ids', {'iface-id': '28b2a3c2-071b-4b34-9bd5-287eeeabc012'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:00:48 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:48Z|00274|binding|INFO|Releasing lport 28b2a3c2-071b-4b34-9bd5-287eeeabc012 from this chassis (sb_readonly=0)
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.990 155391 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/967fc097-5eb9-45d1-9d27-cd16a27cb74e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/967fc097-5eb9-45d1-9d27-cd16a27cb74e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.991 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[12a0e15f-1aac-496c-ad8e-d07d2eefbfe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.992 155391 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: global
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    log         /dev/log local0 debug
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    log-tag     haproxy-metadata-proxy-967fc097-5eb9-45d1-9d27-cd16a27cb74e
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    user        root
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    group       root
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    maxconn     1024
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    pidfile     /var/lib/neutron/external/pids/967fc097-5eb9-45d1-9d27-cd16a27cb74e.pid.haproxy
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    daemon
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: defaults
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    log global
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    mode http
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    option httplog
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    option dontlognull
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    option http-server-close
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    option forwardfor
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    retries                 3
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    timeout http-request    30s
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    timeout connect         30s
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    timeout client          32s
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    timeout server          32s
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    timeout http-keep-alive 30s
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: listen listener
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    bind 169.254.169.254:80
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]:    http-request add-header X-OVN-Network-ID 967fc097-5eb9-45d1-9d27-cd16a27cb74e
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 13:00:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:00:48.993 155391 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'env', 'PROCESS_TAG=haproxy-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/967fc097-5eb9-45d1-9d27-cd16a27cb74e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 13:00:48 np0005605476 nova_compute[239846]: 2026-02-02 18:00:48.998 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.099 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055249.0990345, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.100 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] VM Started (Lifecycle Event)#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.121 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.126 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055249.0993285, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.126 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] VM Paused (Lifecycle Event)#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.147 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.151 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.175 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 13:00:49 np0005605476 nova_compute[239846]: 2026-02-02 18:00:49.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:49 np0005605476 podman[272138]: 2026-02-02 18:00:49.361543805 +0000 UTC m=+0.043388593 container create b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 13:00:49 np0005605476 systemd[1]: Started libpod-conmon-b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9.scope.
Feb  2 13:00:49 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:00:49 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4470fbe4a847ba9a68313204720d30a9b1db1e5bbd1bb42e33e8f582d285903/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 13:00:49 np0005605476 podman[272138]: 2026-02-02 18:00:49.336561142 +0000 UTC m=+0.018405970 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 13:00:49 np0005605476 podman[272138]: 2026-02-02 18:00:49.476605936 +0000 UTC m=+0.158450754 container init b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 13:00:49 np0005605476 podman[272138]: 2026-02-02 18:00:49.483715226 +0000 UTC m=+0.165560014 container start b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 13:00:49 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [NOTICE]   (272157) : New worker (272159) forked
Feb  2 13:00:49 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [NOTICE]   (272157) : Loading success.
Feb  2 13:00:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.433 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.529 239853 DEBUG nova.compute.manager [req-11b4d428-8eb1-454c-a0d3-f5b0acfe48c4 req-3347ada4-5eb5-47f9-b520-40bf218fcce1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.529 239853 DEBUG oslo_concurrency.lockutils [req-11b4d428-8eb1-454c-a0d3-f5b0acfe48c4 req-3347ada4-5eb5-47f9-b520-40bf218fcce1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.530 239853 DEBUG oslo_concurrency.lockutils [req-11b4d428-8eb1-454c-a0d3-f5b0acfe48c4 req-3347ada4-5eb5-47f9-b520-40bf218fcce1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.530 239853 DEBUG oslo_concurrency.lockutils [req-11b4d428-8eb1-454c-a0d3-f5b0acfe48c4 req-3347ada4-5eb5-47f9-b520-40bf218fcce1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.531 239853 DEBUG nova.compute.manager [req-11b4d428-8eb1-454c-a0d3-f5b0acfe48c4 req-3347ada4-5eb5-47f9-b520-40bf218fcce1 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Processing event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.532 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.537 239853 DEBUG nova.virt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Emitting event <LifecycleEvent: 1770055250.5372488, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.538 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] VM Resumed (Lifecycle Event)#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.541 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.546 239853 INFO nova.virt.libvirt.driver [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Instance spawned successfully.#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.547 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.565 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.575 239853 DEBUG nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.580 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.580 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.581 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.582 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.583 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.583 239853 DEBUG nova.virt.libvirt.driver [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.615 239853 INFO nova.compute.manager [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.654 239853 INFO nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Took 7.03 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.655 239853 DEBUG nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:00:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.723 239853 INFO nova.compute.manager [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Took 8.04 seconds to build instance.#033[00m
Feb  2 13:00:50 np0005605476 nova_compute[239846]: 2026-02-02 18:00:50.740 239853 DEBUG oslo_concurrency.lockutils [None req-8e59ca15-76f0-42cf-9c6f-767680058c69 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:00:52 np0005605476 NetworkManager[49022]: <info>  [1770055252.5082] manager: (patch-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Feb  2 13:00:52 np0005605476 NetworkManager[49022]: <info>  [1770055252.5087] manager: (patch-br-int-to-provnet-84933c65-96ea-4900-b5b8-d0e3462e1415): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.505 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.531 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:52 np0005605476 ovn_controller[146041]: 2026-02-02T18:00:52Z|00275|binding|INFO|Releasing lport 28b2a3c2-071b-4b34-9bd5-287eeeabc012 from this chassis (sb_readonly=0)
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.539 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.598 239853 DEBUG nova.compute.manager [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.599 239853 DEBUG oslo_concurrency.lockutils [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.599 239853 DEBUG oslo_concurrency.lockutils [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.599 239853 DEBUG oslo_concurrency.lockutils [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.599 239853 DEBUG nova.compute.manager [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] No waiting events found dispatching network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.599 239853 WARNING nova.compute.manager [req-8fb645bf-dc52-4fcc-a899-40f0e2937235 req-3ef622f6-3c32-4352-9951-7c5a83580149 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received unexpected event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 for instance with vm_state active and task_state None.#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.791 239853 DEBUG nova.compute.manager [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-changed-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.791 239853 DEBUG nova.compute.manager [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Refreshing instance network info cache due to event network-changed-75586f61-07ff-4cd0-9aa1-9845359a1fe6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.791 239853 DEBUG oslo_concurrency.lockutils [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.792 239853 DEBUG oslo_concurrency.lockutils [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquired lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 13:00:52 np0005605476 nova_compute[239846]: 2026-02-02 18:00:52.792 239853 DEBUG nova.network.neutron [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Refreshing network info cache for port 75586f61-07ff-4cd0-9aa1-9845359a1fe6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 13:00:53 np0005605476 nova_compute[239846]: 2026-02-02 18:00:53.087 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb  2 13:00:53 np0005605476 nova_compute[239846]: 2026-02-02 18:00:53.911 239853 DEBUG nova.network.neutron [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updated VIF entry in instance network info cache for port 75586f61-07ff-4cd0-9aa1-9845359a1fe6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 13:00:53 np0005605476 nova_compute[239846]: 2026-02-02 18:00:53.911 239853 DEBUG nova.network.neutron [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating instance_info_cache with network_info: [{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:00:53 np0005605476 nova_compute[239846]: 2026-02-02 18:00:53.934 239853 DEBUG oslo_concurrency.lockutils [req-87a6c201-dadb-4085-93c1-888a3ef4bfc6 req-8b71760b-1534-48b0-9691-8a7d80538333 e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Releasing lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 13:00:55 np0005605476 nova_compute[239846]: 2026-02-02 18:00:55.435 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:00:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 13:00:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb  2 13:00:58 np0005605476 nova_compute[239846]: 2026-02-02 18:00:58.088 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:00:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb  2 13:01:00 np0005605476 nova_compute[239846]: 2026-02-02 18:01:00.437 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:00 np0005605476 podman[272169]: 2026-02-02 18:01:00.596715677 +0000 UTC m=+0.047615059 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:01:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:01 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:01Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:d8:b7 10.100.0.7
Feb  2 13:01:01 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:01Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:d8:b7 10.100.0.7
Feb  2 13:01:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 327 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Feb  2 13:01:03 np0005605476 nova_compute[239846]: 2026-02-02 18:01:03.133 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:03 np0005605476 podman[272200]: 2026-02-02 18:01:03.175850742 +0000 UTC m=+0.110564460 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 13:01:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 327 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 79 op/s
Feb  2 13:01:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:01:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097734262' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:01:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:01:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097734262' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:01:05 np0005605476 nova_compute[239846]: 2026-02-02 18:01:05.440 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 350 MiB data, 678 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 350 MiB data, 678 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 13:01:08 np0005605476 nova_compute[239846]: 2026-02-02 18:01:08.136 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 350 MiB data, 678 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 13:01:10 np0005605476 nova_compute[239846]: 2026-02-02 18:01:10.442 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.495 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.495 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.514 239853 DEBUG nova.objects.instance [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.549 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 350 MiB data, 678 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.750 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.751 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.751 239853 INFO nova.compute.manager [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attaching volume 5b2227e6-ad31-4213-8c1f-2606b6cf1a21 to /dev/vdb#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.914 239853 DEBUG os_brick.utils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.915 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.925 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.926 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[a03800e0-2d47-4cb0-96e0-cfc50a1cf6ad]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.927 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.932 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.932 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[cc090525-00a7-4884-82b0-40d02d6ded3a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.933 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.938 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.938 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[05f4f222-c81b-443e-85b1-ea51e60c983e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.939 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[533bad9d-f8a8-4eb7-bb31-20f9b281ba8e]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.940 239853 DEBUG oslo_concurrency.processutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.957 239853 DEBUG oslo_concurrency.processutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.960 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.960 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.960 239853 DEBUG os_brick.initiator.connectors.lightos [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.961 239853 DEBUG os_brick.utils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] <== get_connector_properties: return (45ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 13:01:11 np0005605476 nova_compute[239846]: 2026-02-02 18:01:11.961 239853 DEBUG nova.virt.block_device [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating existing volume attachment record: 06099391-0fcc-4b8f-82cd-6addbe9abf29 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:01:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087377292' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:01:12 np0005605476 nova_compute[239846]: 2026-02-02 18:01:12.790 239853 DEBUG nova.objects.instance [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:12 np0005605476 nova_compute[239846]: 2026-02-02 18:01:12.838 239853 DEBUG nova.virt.libvirt.driver [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to attach volume 5b2227e6-ad31-4213-8c1f-2606b6cf1a21 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 13:01:12 np0005605476 nova_compute[239846]: 2026-02-02 18:01:12.841 239853 DEBUG nova.virt.libvirt.guest [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-5b2227e6-ad31-4213-8c1f-2606b6cf1a21">
Feb  2 13:01:12 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 13:01:12 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  </auth>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:12 np0005605476 nova_compute[239846]:  <serial>5b2227e6-ad31-4213-8c1f-2606b6cf1a21</serial>
Feb  2 13:01:12 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:12 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 13:01:12 np0005605476 podman[272379]: 2026-02-02 18:01:12.944391394 +0000 UTC m=+0.088590249 container create 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:01:12 np0005605476 podman[272379]: 2026-02-02 18:01:12.875523684 +0000 UTC m=+0.019722559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:13 np0005605476 systemd[1]: Started libpod-conmon-8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519.scope.
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.036 239853 DEBUG nova.virt.libvirt.driver [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.038 239853 DEBUG nova.virt.libvirt.driver [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.039 239853 DEBUG nova.virt.libvirt.driver [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.039 239853 DEBUG nova.virt.libvirt.driver [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No VIF found with MAC fa:16:3e:30:d8:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:01:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:13 np0005605476 podman[272379]: 2026-02-02 18:01:13.112654226 +0000 UTC m=+0.256853141 container init 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 13:01:13 np0005605476 podman[272379]: 2026-02-02 18:01:13.124085429 +0000 UTC m=+0.268284244 container start 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:01:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:01:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:13 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:01:13 np0005605476 hopeful_margulis[272416]: 167 167
Feb  2 13:01:13 np0005605476 systemd[1]: libpod-8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519.scope: Deactivated successfully.
Feb  2 13:01:13 np0005605476 podman[272379]: 2026-02-02 18:01:13.161407796 +0000 UTC m=+0.305606651 container attach 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.162 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:13 np0005605476 podman[272379]: 2026-02-02 18:01:13.163114194 +0000 UTC m=+0.307313019 container died 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:01:13 np0005605476 systemd[1]: var-lib-containers-storage-overlay-47fad1bb96a7a063d23158c420a663a18675c1b926e9e06e1e4b44fef54419a6-merged.mount: Deactivated successfully.
Feb  2 13:01:13 np0005605476 nova_compute[239846]: 2026-02-02 18:01:13.345 239853 DEBUG oslo_concurrency.lockutils [None req-8aa3a067-18c9-4ae0-883b-895ba517711a b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:13 np0005605476 podman[272379]: 2026-02-02 18:01:13.37886985 +0000 UTC m=+0.523068655 container remove 8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:01:13 np0005605476 systemd[1]: libpod-conmon-8667e370053163508e081ed68cd978a97ad98a845f1682f892ff3ba4fc45d519.scope: Deactivated successfully.
Feb  2 13:01:13 np0005605476 podman[272440]: 2026-02-02 18:01:13.507780639 +0000 UTC m=+0.022029035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 350 MiB data, 678 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb  2 13:01:13 np0005605476 podman[272440]: 2026-02-02 18:01:13.723846234 +0000 UTC m=+0.238094620 container create 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 13:01:13 np0005605476 systemd[1]: Started libpod-conmon-0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10.scope.
Feb  2 13:01:13 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:13 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:13 np0005605476 podman[272440]: 2026-02-02 18:01:13.866521392 +0000 UTC m=+0.380769798 container init 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 13:01:13 np0005605476 podman[272440]: 2026-02-02 18:01:13.873668144 +0000 UTC m=+0.387916510 container start 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 13:01:13 np0005605476 podman[272440]: 2026-02-02 18:01:13.905827734 +0000 UTC m=+0.420076140 container attach 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:01:14 np0005605476 vibrant_mclaren[272456]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:01:14 np0005605476 vibrant_mclaren[272456]: --> All data devices are unavailable
Feb  2 13:01:14 np0005605476 systemd[1]: libpod-0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10.scope: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272440]: 2026-02-02 18:01:14.315690905 +0000 UTC m=+0.829939291 container died 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 13:01:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-76c55d7d789d8b98bddb815ccf6c4c022249b1ef2db1de6e73baf07cda9c6907-merged.mount: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272440]: 2026-02-02 18:01:14.361174572 +0000 UTC m=+0.875422948 container remove 0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:01:14 np0005605476 systemd[1]: libpod-conmon-0c319fcc350629acddab893d9135f7b835be765d3e7ab2c78b3e18e2be608a10.scope: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.766234736 +0000 UTC m=+0.039867979 container create 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 13:01:14 np0005605476 systemd[1]: Started libpod-conmon-1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f.scope.
Feb  2 13:01:14 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.82749451 +0000 UTC m=+0.101127753 container init 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.831140473 +0000 UTC m=+0.104773716 container start 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.833915491 +0000 UTC m=+0.107548734 container attach 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 13:01:14 np0005605476 affectionate_cannon[272566]: 167 167
Feb  2 13:01:14 np0005605476 systemd[1]: libpod-1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f.scope: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.836026531 +0000 UTC m=+0.109659774 container died 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.749866523 +0000 UTC m=+0.023499806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:14 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9b2c46b0c25b1a2f13539dbc3706960f069b66fb924af06ee18243a752ee583e-merged.mount: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272550]: 2026-02-02 18:01:14.865731242 +0000 UTC m=+0.139364485 container remove 1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 13:01:14 np0005605476 systemd[1]: libpod-conmon-1da7f7af146ed8591d80844d618e338ffae98ff7409ef4fdc35561276d8fe39f.scope: Deactivated successfully.
Feb  2 13:01:14 np0005605476 podman[272590]: 2026-02-02 18:01:14.999770746 +0000 UTC m=+0.051947922 container create a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:01:15 np0005605476 systemd[1]: Started libpod-conmon-a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c.scope.
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:14.979375828 +0000 UTC m=+0.031553044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4270b8cf2d250da58c9bb695046297f5605c63c3d81ea12b0322f25f3fb8990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4270b8cf2d250da58c9bb695046297f5605c63c3d81ea12b0322f25f3fb8990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4270b8cf2d250da58c9bb695046297f5605c63c3d81ea12b0322f25f3fb8990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:15 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4270b8cf2d250da58c9bb695046297f5605c63c3d81ea12b0322f25f3fb8990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:15.109597374 +0000 UTC m=+0.161774560 container init a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:15.118524447 +0000 UTC m=+0.170701623 container start a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:15.122114368 +0000 UTC m=+0.174291534 container attach a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]: {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    "0": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "devices": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "/dev/loop3"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            ],
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_name": "ceph_lv0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_size": "21470642176",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "name": "ceph_lv0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "tags": {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_name": "ceph",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.crush_device_class": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.encrypted": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.objectstore": "bluestore",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_id": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.vdo": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.with_tpm": "0"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            },
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "vg_name": "ceph_vg0"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        }
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    ],
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    "1": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "devices": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "/dev/loop4"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            ],
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_name": "ceph_lv1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_size": "21470642176",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "name": "ceph_lv1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "tags": {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_name": "ceph",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.crush_device_class": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.encrypted": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.objectstore": "bluestore",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_id": "1",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.vdo": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.with_tpm": "0"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            },
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "vg_name": "ceph_vg1"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        }
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    ],
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    "2": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "devices": [
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "/dev/loop5"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            ],
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_name": "ceph_lv2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_size": "21470642176",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "name": "ceph_lv2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "tags": {
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.cluster_name": "ceph",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.crush_device_class": "",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.encrypted": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.objectstore": "bluestore",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osd_id": "2",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.vdo": "0",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:                "ceph.with_tpm": "0"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            },
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "type": "block",
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:            "vg_name": "ceph_vg2"
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:        }
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]:    ]
Feb  2 13:01:15 np0005605476 dazzling_satoshi[272607]: }
Feb  2 13:01:15 np0005605476 systemd[1]: libpod-a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c.scope: Deactivated successfully.
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:15.413585338 +0000 UTC m=+0.465762514 container died a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:01:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d4270b8cf2d250da58c9bb695046297f5605c63c3d81ea12b0322f25f3fb8990-merged.mount: Deactivated successfully.
Feb  2 13:01:15 np0005605476 podman[272590]: 2026-02-02 18:01:15.450523332 +0000 UTC m=+0.502700488 container remove a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 13:01:15 np0005605476 nova_compute[239846]: 2026-02-02 18:01:15.498 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:15 np0005605476 systemd[1]: libpod-conmon-a12efef239f50995b917ccdbb1c296dfff2c9ce3a0b6774a8c3f86b837919a2c.scope: Deactivated successfully.
Feb  2 13:01:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 351 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 1.2 MiB/s wr, 59 op/s
Feb  2 13:01:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:01:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 30K writes, 114K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.04 MB/s#012Cumulative WAL: 30K writes, 11K syncs, 2.64 writes per sync, written: 0.08 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 36.87 MB, 0.06 MB/s#012Interval WAL: 11K writes, 5142 syncs, 2.33 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.881547581 +0000 UTC m=+0.039192780 container create d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 13:01:15 np0005605476 systemd[1]: Started libpod-conmon-d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5.scope.
Feb  2 13:01:15 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.955487034 +0000 UTC m=+0.113132253 container init d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.861254197 +0000 UTC m=+0.018899416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.960344111 +0000 UTC m=+0.117989320 container start d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.963868771 +0000 UTC m=+0.121514000 container attach d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 13:01:15 np0005605476 peaceful_wing[272706]: 167 167
Feb  2 13:01:15 np0005605476 systemd[1]: libpod-d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5.scope: Deactivated successfully.
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.965575219 +0000 UTC m=+0.123220428 container died d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 13:01:15 np0005605476 systemd[1]: var-lib-containers-storage-overlay-ccbc73a2ffc8bb6fcd6525262c00815ffee517bb9603474f0af2d0773576cfcd-merged.mount: Deactivated successfully.
Feb  2 13:01:15 np0005605476 podman[272690]: 2026-02-02 18:01:15.998574343 +0000 UTC m=+0.156219542 container remove d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:01:16 np0005605476 systemd[1]: libpod-conmon-d1a7eb98a4931dc366cbcb1b037043f8d51f3659e83b3bcbbf49790161bc20b5.scope: Deactivated successfully.
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.112388064 +0000 UTC m=+0.031939155 container create 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Feb  2 13:01:16 np0005605476 systemd[1]: Started libpod-conmon-480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4.scope.
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Feb  2 13:01:16 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:01:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2243cb01fe8e96e6675d227d3403042e5ee8d9fc8b3a4cb6d69f84af3b257a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2243cb01fe8e96e6675d227d3403042e5ee8d9fc8b3a4cb6d69f84af3b257a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2243cb01fe8e96e6675d227d3403042e5ee8d9fc8b3a4cb6d69f84af3b257a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:16 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2243cb01fe8e96e6675d227d3403042e5ee8d9fc8b3a4cb6d69f84af3b257a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.178712122 +0000 UTC m=+0.098263233 container init 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.184031412 +0000 UTC m=+0.103582503 container start 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.186837822 +0000 UTC m=+0.106388933 container attach 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.098624025 +0000 UTC m=+0.018175136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:01:16 np0005605476 lvm[272825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:01:16 np0005605476 lvm[272824]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:01:16 np0005605476 lvm[272824]: VG ceph_vg0 finished
Feb  2 13:01:16 np0005605476 lvm[272825]: VG ceph_vg1 finished
Feb  2 13:01:16 np0005605476 lvm[272827]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:01:16 np0005605476 lvm[272827]: VG ceph_vg2 finished
Feb  2 13:01:16 np0005605476 boring_goldstine[272746]: {}
Feb  2 13:01:16 np0005605476 systemd[1]: libpod-480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4.scope: Deactivated successfully.
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.868370471 +0000 UTC m=+0.787921562 container died 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 13:01:16 np0005605476 systemd[1]: var-lib-containers-storage-overlay-2243cb01fe8e96e6675d227d3403042e5ee8d9fc8b3a4cb6d69f84af3b257a46-merged.mount: Deactivated successfully.
Feb  2 13:01:16 np0005605476 podman[272729]: 2026-02-02 18:01:16.904979597 +0000 UTC m=+0.824530688 container remove 480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_goldstine, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:01:16 np0005605476 systemd[1]: libpod-conmon-480e74d69fbb464127761de112805d2bb0aae408d8b364f79a0fb2426a87afc4.scope: Deactivated successfully.
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:01:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:01:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 351 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 KiB/s wr, 11 op/s
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.161823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278161851, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2280, "num_deletes": 257, "total_data_size": 3458846, "memory_usage": 3525104, "flush_reason": "Manual Compaction"}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb  2 13:01:18 np0005605476 nova_compute[239846]: 2026-02-02 18:01:18.165 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278175742, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3395424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31889, "largest_seqno": 34167, "table_properties": {"data_size": 3384725, "index_size": 6937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22295, "raw_average_key_size": 20, "raw_value_size": 3363328, "raw_average_value_size": 3166, "num_data_blocks": 301, "num_entries": 1062, "num_filter_entries": 1062, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055095, "oldest_key_time": 1770055095, "file_creation_time": 1770055278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 13984 microseconds, and 5968 cpu microseconds.
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.175802) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3395424 bytes OK
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.175824) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.177798) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.177815) EVENT_LOG_v1 {"time_micros": 1770055278177810, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.177834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3449109, prev total WAL file size 3449109, number of live WAL files 2.
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.178464) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3315KB)], [65(10MB)]
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278178492, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13902863, "oldest_snapshot_seqno": -1}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6771 keys, 12089456 bytes, temperature: kUnknown
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278234528, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12089456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12036364, "index_size": 35060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 169571, "raw_average_key_size": 25, "raw_value_size": 11907094, "raw_average_value_size": 1758, "num_data_blocks": 1406, "num_entries": 6771, "num_filter_entries": 6771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.234828) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12089456 bytes
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.236509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.7 rd, 215.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7296, records dropped: 525 output_compression: NoCompression
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.236532) EVENT_LOG_v1 {"time_micros": 1770055278236520, "job": 36, "event": "compaction_finished", "compaction_time_micros": 56123, "compaction_time_cpu_micros": 27823, "output_level": 6, "num_output_files": 1, "total_output_size": 12089456, "num_input_records": 7296, "num_output_records": 6771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278237008, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055278238084, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.178420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.238129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.238134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.238137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.238139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:18 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:01:18.238141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:01:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:01:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.69 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 32.67 MB, 0.05 MB/s#012Interval WAL: 11K writes, 4939 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:01:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 352 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 105 KiB/s wr, 28 op/s
Feb  2 13:01:20 np0005605476 nova_compute[239846]: 2026-02-02 18:01:20.501 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Feb  2 13:01:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Feb  2 13:01:21 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Feb  2 13:01:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 352 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 71 KiB/s wr, 62 op/s
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.713 239853 DEBUG oslo_concurrency.lockutils [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.713 239853 DEBUG oslo_concurrency.lockutils [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.727 239853 INFO nova.compute.manager [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Detaching volume 5b2227e6-ad31-4213-8c1f-2606b6cf1a21#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.832 239853 INFO nova.virt.block_device [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to driver detach volume 5b2227e6-ad31-4213-8c1f-2606b6cf1a21 from mountpoint /dev/vdb#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.842 239853 DEBUG nova.virt.libvirt.driver [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Attempting to detach device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.843 239853 DEBUG nova.virt.libvirt.guest [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-5b2227e6-ad31-4213-8c1f-2606b6cf1a21">
Feb  2 13:01:21 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <serial>5b2227e6-ad31-4213-8c1f-2606b6cf1a21</serial>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:21 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.851 239853 INFO nova.virt.libvirt.driver [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config.#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.852 239853 DEBUG nova.virt.libvirt.driver [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.853 239853 DEBUG nova.virt.libvirt.guest [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-5b2227e6-ad31-4213-8c1f-2606b6cf1a21">
Feb  2 13:01:21 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <serial>5b2227e6-ad31-4213-8c1f-2606b6cf1a21</serial>
Feb  2 13:01:21 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:21 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:21 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.957 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770055281.9571908, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.959 239853 DEBUG nova.virt.libvirt.driver [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 13:01:21 np0005605476 nova_compute[239846]: 2026-02-02 18:01:21.961 239853 INFO nova.virt.libvirt.driver [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config.#033[00m
Feb  2 13:01:22 np0005605476 nova_compute[239846]: 2026-02-02 18:01:22.103 239853 DEBUG nova.objects.instance [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:22 np0005605476 nova_compute[239846]: 2026-02-02 18:01:22.134 239853 DEBUG oslo_concurrency.lockutils [None req-7e19fd00-2d1d-45e9-a412-589dc4cb1365 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:22 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:22Z|00276|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Feb  2 13:01:23 np0005605476 nova_compute[239846]: 2026-02-02 18:01:23.226 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:01:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 93K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 8783 syncs, 2.73 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8971 writes, 32K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 26.58 MB, 0.04 MB/s#012Interval WAL: 8971 writes, 3901 syncs, 2.30 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:01:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 352 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 56 KiB/s wr, 49 op/s
Feb  2 13:01:24 np0005605476 nova_compute[239846]: 2026-02-02 18:01:24.843 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:24 np0005605476 nova_compute[239846]: 2026-02-02 18:01:24.843 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:24 np0005605476 nova_compute[239846]: 2026-02-02 18:01:24.864 239853 DEBUG nova.objects.instance [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:24 np0005605476 nova_compute[239846]: 2026-02-02 18:01:24.903 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.175 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.176 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.176 239853 INFO nova.compute.manager [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attaching volume 861c9859-b1ea-488e-850c-4d96385cbd5a to /dev/vdb#033[00m
Feb  2 13:01:25 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.317 239853 DEBUG os_brick.utils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.318 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.326 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.327 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[7655d88c-5e3e-47e2-9eb7-d95854016956]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.328 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.333 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.333 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6a2015-b1c1-483b-afc8-7636bd7a46a2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.334 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.342 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.342 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5c5ae8-5b51-4d1f-a282-da2ef63d6f60]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.344 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0aba0785-0659-4340-a72d-45b521e9d7be]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.344 239853 DEBUG oslo_concurrency.processutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.360 239853 DEBUG oslo_concurrency.processutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.362 239853 DEBUG os_brick.initiator.connectors.lightos [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.362 239853 DEBUG os_brick.initiator.connectors.lightos [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.363 239853 DEBUG os_brick.initiator.connectors.lightos [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.363 239853 DEBUG os_brick.utils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.363 239853 DEBUG nova.virt.block_device [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating existing volume attachment record: 97871d1f-7e97-4db3-a215-5f519809b2ff _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 13:01:25 np0005605476 nova_compute[239846]: 2026-02-02 18:01:25.508 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 353 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 76 KiB/s wr, 79 op/s
Feb  2 13:01:26 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:01:26 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4158049187' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.095 239853 DEBUG nova.objects.instance [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.121 239853 DEBUG nova.virt.libvirt.driver [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to attach volume 861c9859-b1ea-488e-850c-4d96385cbd5a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.125 239853 DEBUG nova.virt.libvirt.guest [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-861c9859-b1ea-488e-850c-4d96385cbd5a">
Feb  2 13:01:26 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 13:01:26 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  </auth>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:26 np0005605476 nova_compute[239846]:  <serial>861c9859-b1ea-488e-850c-4d96385cbd5a</serial>
Feb  2 13:01:26 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:26 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.222 239853 DEBUG nova.virt.libvirt.driver [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.223 239853 DEBUG nova.virt.libvirt.driver [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.223 239853 DEBUG nova.virt.libvirt.driver [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.223 239853 DEBUG nova.virt.libvirt.driver [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No VIF found with MAC fa:16:3e:30:d8:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:01:26 np0005605476 nova_compute[239846]: 2026-02-02 18:01:26.371 239853 DEBUG oslo_concurrency.lockutils [None req-a91f95d8-5c51-40a4-9d42-aa0b3fafd619 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 353 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 64 KiB/s wr, 66 op/s
Feb  2 13:01:28 np0005605476 nova_compute[239846]: 2026-02-02 18:01:28.227 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.220 239853 DEBUG oslo_concurrency.lockutils [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.221 239853 DEBUG oslo_concurrency.lockutils [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.233 239853 INFO nova.compute.manager [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Detaching volume 861c9859-b1ea-488e-850c-4d96385cbd5a#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.385 239853 INFO nova.virt.block_device [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to driver detach volume 861c9859-b1ea-488e-850c-4d96385cbd5a from mountpoint /dev/vdb#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.393 239853 DEBUG nova.virt.libvirt.driver [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Attempting to detach device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.394 239853 DEBUG nova.virt.libvirt.guest [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-861c9859-b1ea-488e-850c-4d96385cbd5a">
Feb  2 13:01:29 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <serial>861c9859-b1ea-488e-850c-4d96385cbd5a</serial>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:29 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.401 239853 INFO nova.virt.libvirt.driver [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config.#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.401 239853 DEBUG nova.virt.libvirt.driver [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.402 239853 DEBUG nova.virt.libvirt.guest [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-861c9859-b1ea-488e-850c-4d96385cbd5a">
Feb  2 13:01:29 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <serial>861c9859-b1ea-488e-850c-4d96385cbd5a</serial>
Feb  2 13:01:29 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:29 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:29 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.505 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770055289.505674, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.507 239853 DEBUG nova.virt.libvirt.driver [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.509 239853 INFO nova.virt.libvirt.driver [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config.#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.657 239853 DEBUG nova.objects.instance [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:29 np0005605476 nova_compute[239846]: 2026-02-02 18:01:29.687 239853 DEBUG oslo_concurrency.lockutils [None req-9b7dace9-c4f5-44e2-a6f8-30876171a29e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 353 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 58 KiB/s wr, 57 op/s
Feb  2 13:01:30 np0005605476 nova_compute[239846]: 2026-02-02 18:01:30.509 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:31 np0005605476 podman[272900]: 2026-02-02 18:01:31.617770743 +0000 UTC m=+0.051353545 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 13:01:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 82 KiB/s wr, 40 op/s
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.201 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.201 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.216 239853 DEBUG nova.objects.instance [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.245 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.404 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.405 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.405 239853 INFO nova.compute.manager [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attaching volume 6b2bb6ad-3800-4c34-997e-8c27260eb330 to /dev/vdb#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.527 239853 DEBUG os_brick.utils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.528 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.539 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.539 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[89fef947-a1d2-494d-9f73-c8939ab6073b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.541 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.548 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.549 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d3640e-d682-4972-bac4-07d3b64585ca]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.550 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.559 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.559 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[5e85a083-9d90-484c-9feb-dbb0c91d8419]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.560 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[98b0f757-81fa-4f0d-81c9-67241d24f123]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.561 239853 DEBUG oslo_concurrency.processutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.577 239853 DEBUG oslo_concurrency.processutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.579 239853 DEBUG os_brick.initiator.connectors.lightos [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.580 239853 DEBUG os_brick.initiator.connectors.lightos [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.580 239853 DEBUG os_brick.initiator.connectors.lightos [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.581 239853 DEBUG os_brick.utils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 13:01:32 np0005605476 nova_compute[239846]: 2026-02-02 18:01:32.581 239853 DEBUG nova.virt.block_device [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating existing volume attachment record: 1c246a94-0bdd-4c07-80f0-6b35766d4dbe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.229 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:33 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:01:33 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/343598485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.421 239853 DEBUG nova.objects.instance [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.441 239853 DEBUG nova.virt.libvirt.driver [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to attach volume 6b2bb6ad-3800-4c34-997e-8c27260eb330 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.444 239853 DEBUG nova.virt.libvirt.guest [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6b2bb6ad-3800-4c34-997e-8c27260eb330">
Feb  2 13:01:33 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 13:01:33 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  </auth>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:33 np0005605476 nova_compute[239846]:  <serial>6b2bb6ad-3800-4c34-997e-8c27260eb330</serial>
Feb  2 13:01:33 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:33 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.545 239853 DEBUG nova.virt.libvirt.driver [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.545 239853 DEBUG nova.virt.libvirt.driver [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.545 239853 DEBUG nova.virt.libvirt.driver [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.546 239853 DEBUG nova.virt.libvirt.driver [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No VIF found with MAC fa:16:3e:30:d8:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:01:33 np0005605476 podman[272947]: 2026-02-02 18:01:33.630586389 +0000 UTC m=+0.073795959 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  2 13:01:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 72 KiB/s wr, 35 op/s
Feb  2 13:01:33 np0005605476 nova_compute[239846]: 2026-02-02 18:01:33.718 239853 DEBUG oslo_concurrency.lockutils [None req-d4dcbd48-737a-4fca-b29b-447cb635c068 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:35 np0005605476 nova_compute[239846]: 2026-02-02 18:01:35.536 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 78 KiB/s wr, 58 op/s
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.187 239853 DEBUG oslo_concurrency.lockutils [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.188 239853 DEBUG oslo_concurrency.lockutils [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.205 239853 INFO nova.compute.manager [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Detaching volume 6b2bb6ad-3800-4c34-997e-8c27260eb330#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.310 239853 INFO nova.virt.block_device [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to driver detach volume 6b2bb6ad-3800-4c34-997e-8c27260eb330 from mountpoint /dev/vdb#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.319 239853 DEBUG nova.virt.libvirt.driver [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Attempting to detach device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.320 239853 DEBUG nova.virt.libvirt.guest [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6b2bb6ad-3800-4c34-997e-8c27260eb330">
Feb  2 13:01:36 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <serial>6b2bb6ad-3800-4c34-997e-8c27260eb330</serial>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:36 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.373 239853 INFO nova.virt.libvirt.driver [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config.#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.373 239853 DEBUG nova.virt.libvirt.driver [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.373 239853 DEBUG nova.virt.libvirt.guest [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-6b2bb6ad-3800-4c34-997e-8c27260eb330">
Feb  2 13:01:36 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <serial>6b2bb6ad-3800-4c34-997e-8c27260eb330</serial>
Feb  2 13:01:36 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:36 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:36 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.433 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770055296.4330914, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.435 239853 DEBUG nova.virt.libvirt.driver [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.436 239853 INFO nova.virt.libvirt.driver [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config.#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.606 239853 DEBUG nova.objects.instance [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:36 np0005605476 nova_compute[239846]: 2026-02-02 18:01:36.643 239853 DEBUG oslo_concurrency.lockutils [None req-de8136f8-251e-4942-9d4a-f918c38266b1 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:01:36
Feb  2 13:01:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:01:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:01:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups']
Feb  2 13:01:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 63 KiB/s wr, 36 op/s
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:01:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:01:38 np0005605476 nova_compute[239846]: 2026-02-02 18:01:38.231 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.195 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.196 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.215 239853 DEBUG nova.objects.instance [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.255 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.540 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.540 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.541 239853 INFO nova.compute.manager [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attaching volume a335f4e5-9320-4bb1-83eb-3f0bad725427 to /dev/vdb#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.665 239853 DEBUG os_brick.utils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.666 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.676 249256 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.677 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[0e284230-1fc4-4bda-82a4-31fcbed19387]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.678 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.684 249256 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.684 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[959608b8-11d1-4318-82e2-d6b69c5ce620]: (4, ('InitiatorName=iqn.1994-05.com.redhat:68bef92c4cc9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.686 249256 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.695 249256 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.695 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[6f33d7ce-c2e7-4d3a-981b-9c8d7b519b3e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.697 249256 DEBUG oslo.privsep.daemon [-] privsep: reply[09982e64-b855-4e69-a1ca-9481481d9a5c]: (4, 'cb1779c6-d1fa-4b89-a494-cd579a1210f6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.697 239853 DEBUG oslo_concurrency.processutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 120 KiB/s wr, 46 op/s
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.715 239853 DEBUG oslo_concurrency.processutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.717 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.717 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.718 239853 DEBUG os_brick.initiator.connectors.lightos [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.718 239853 DEBUG os_brick.utils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:68bef92c4cc9', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'cb1779c6-d1fa-4b89-a494-cd579a1210f6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 13:01:39 np0005605476 nova_compute[239846]: 2026-02-02 18:01:39.718 239853 DEBUG nova.virt.block_device [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating existing volume attachment record: b623d895-2d99-4622-966d-52273da29791 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 13:01:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 13:01:40 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777161158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.538 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.669 239853 DEBUG nova.objects.instance [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.740 239853 DEBUG nova.virt.libvirt.driver [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to attach volume a335f4e5-9320-4bb1-83eb-3f0bad725427 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.742 239853 DEBUG nova.virt.libvirt.guest [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a335f4e5-9320-4bb1-83eb-3f0bad725427">
Feb  2 13:01:40 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  <auth username="openstack">
Feb  2 13:01:40 np0005605476 nova_compute[239846]:    <secret type="ceph" uuid="eb48d0ef-3496-563c-b73d-661fb962013e"/>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  </auth>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:40 np0005605476 nova_compute[239846]:  <serial>a335f4e5-9320-4bb1-83eb-3f0bad725427</serial>
Feb  2 13:01:40 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:40 np0005605476 nova_compute[239846]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.974 239853 DEBUG nova.virt.libvirt.driver [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.975 239853 DEBUG nova.virt.libvirt.driver [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.975 239853 DEBUG nova.virt.libvirt.driver [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 13:01:40 np0005605476 nova_compute[239846]: 2026-02-02 18:01:40.975 239853 DEBUG nova.virt.libvirt.driver [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] No VIF found with MAC fa:16:3e:30:d8:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 13:01:41 np0005605476 nova_compute[239846]: 2026-02-02 18:01:41.222 239853 DEBUG oslo_concurrency.lockutils [None req-6dcf44fd-8b3e-46b1-a8f7-839ce1f5765e b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 111 KiB/s wr, 50 op/s
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.293 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.531 239853 DEBUG oslo_concurrency.lockutils [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.532 239853 DEBUG oslo_concurrency.lockutils [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.562 239853 INFO nova.compute.manager [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Detaching volume a335f4e5-9320-4bb1-83eb-3f0bad725427#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.687 239853 INFO nova.virt.block_device [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Attempting to driver detach volume a335f4e5-9320-4bb1-83eb-3f0bad725427 from mountpoint /dev/vdb#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.697 239853 DEBUG nova.virt.libvirt.driver [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Attempting to detach device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.698 239853 DEBUG nova.virt.libvirt.guest [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a335f4e5-9320-4bb1-83eb-3f0bad725427">
Feb  2 13:01:43 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <serial>a335f4e5-9320-4bb1-83eb-3f0bad725427</serial>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:43 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 64 KiB/s wr, 41 op/s
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.783 239853 INFO nova.virt.libvirt.driver [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the persistent domain config.#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.784 239853 DEBUG nova.virt.libvirt.driver [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.785 239853 DEBUG nova.virt.libvirt.guest [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <source protocol="rbd" name="volumes/volume-a335f4e5-9320-4bb1-83eb-3f0bad725427">
Feb  2 13:01:43 np0005605476 nova_compute[239846]:    <host name="192.168.122.100" port="6789"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  </source>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <target dev="vdb" bus="virtio"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <serial>a335f4e5-9320-4bb1-83eb-3f0bad725427</serial>
Feb  2 13:01:43 np0005605476 nova_compute[239846]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 13:01:43 np0005605476 nova_compute[239846]: </disk>
Feb  2 13:01:43 np0005605476 nova_compute[239846]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.894 239853 DEBUG nova.virt.libvirt.driver [None req-75407507-981f-4e36-a666-99ea42d55868 - - - - - -] Received event <DeviceRemovedEvent: 1770055303.8942678, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.896 239853 DEBUG nova.virt.libvirt.driver [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 13:01:43 np0005605476 nova_compute[239846]: 2026-02-02 18:01:43.899 239853 INFO nova.virt.libvirt.driver [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully detached device vdb from instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e from the live domain config.#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.126 239853 DEBUG nova.objects.instance [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'flavor' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.175 239853 DEBUG oslo_concurrency.lockutils [None req-c36cd77f-9149-40d5-bf7d-788f9a32dcce b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.240 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.240 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.263 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.263 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.263 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.263 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.264 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:01:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797095854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.809 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.878 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 13:01:44 np0005605476 nova_compute[239846]: 2026-02-02 18:01:44.878 239853 DEBUG nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.019 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.020 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4073MB free_disk=59.942051788792014GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.020 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.021 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.094 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.094 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.095 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.133 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3422314404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3422314404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.540 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457534760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:01:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 355 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 122 KiB/s wr, 61 op/s
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.712 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.716 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.730 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.749 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:01:45 np0005605476 nova_compute[239846]: 2026-02-02 18:01:45.749 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:46.653 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:46.655 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:46.656 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:01:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560705577' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:01:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:01:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560705577' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007710906764822422 of space, bias 1.0, pg target 0.23132720294467266 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00296077587303802 of space, bias 1.0, pg target 0.888232761911406 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.3452023273102124e-06 of space, bias 1.0, pg target 0.0007035606981930637 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664325430685556 of space, bias 1.0, pg target 0.1999297629205667 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.696030435116874e-07 of space, bias 4.0, pg target 0.0011635236522140248 quantized to 16 (current 16)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:01:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 355 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 116 KiB/s wr, 39 op/s
Feb  2 13:01:47 np0005605476 nova_compute[239846]: 2026-02-02 18:01:47.750 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:01:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399581661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:01:48 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:01:48 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399581661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.296 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.799 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.799 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquired lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.800 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 13:01:48 np0005605476 nova_compute[239846]: 2026-02-02 18:01:48.800 239853 DEBUG nova.objects.instance [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Feb  2 13:01:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Feb  2 13:01:49 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Feb  2 13:01:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 71 KiB/s wr, 71 op/s
Feb  2 13:01:50 np0005605476 nova_compute[239846]: 2026-02-02 18:01:50.038 239853 DEBUG nova.network.neutron [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating instance_info_cache with network_info: [{"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:01:50 np0005605476 nova_compute[239846]: 2026-02-02 18:01:50.059 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Releasing lock "refresh_cache-3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 13:01:50 np0005605476 nova_compute[239846]: 2026-02-02 18:01:50.060 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 13:01:50 np0005605476 nova_compute[239846]: 2026-02-02 18:01:50.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:50 np0005605476 nova_compute[239846]: 2026-02-02 18:01:50.542 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:51 np0005605476 nova_compute[239846]: 2026-02-02 18:01:51.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:51 np0005605476 nova_compute[239846]: 2026-02-02 18:01:51.258 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Feb  2 13:01:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Feb  2 13:01:51 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Feb  2 13:01:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 91 KiB/s wr, 138 op/s
Feb  2 13:01:52 np0005605476 nova_compute[239846]: 2026-02-02 18:01:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:01:52 np0005605476 nova_compute[239846]: 2026-02-02 18:01:52.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:01:53 np0005605476 nova_compute[239846]: 2026-02-02 18:01:53.339 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Feb  2 13:01:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Feb  2 13:01:53 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Feb  2 13:01:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 5.3 KiB/s wr, 144 op/s
Feb  2 13:01:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:01:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162032234' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:01:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:01:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4162032234' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.378 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.378 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.379 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.379 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.380 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.382 239853 INFO nova.compute.manager [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Terminating instance#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.383 239853 DEBUG nova.compute.manager [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 13:01:55 np0005605476 kernel: tap75586f61-07 (unregistering): left promiscuous mode
Feb  2 13:01:55 np0005605476 NetworkManager[49022]: <info>  [1770055315.4377] device (tap75586f61-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 13:01:55 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:55Z|00277|binding|INFO|Releasing lport 75586f61-07ff-4cd0-9aa1-9845359a1fe6 from this chassis (sb_readonly=0)
Feb  2 13:01:55 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:55Z|00278|binding|INFO|Setting lport 75586f61-07ff-4cd0-9aa1-9845359a1fe6 down in Southbound
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.441 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 ovn_controller[146041]: 2026-02-02T18:01:55Z|00279|binding|INFO|Removing iface tap75586f61-07 ovn-installed in OVS
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.444 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.452 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.454 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:d8:b7 10.100.0.7'], port_security=['fa:16:3e:30:d8:b7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3ba4448b-74c6-491d-bbbe-a1f5e2e9852e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '896604c79c574097a167451efa4ee5b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aab8f39a-c545-46f7-8ee0-60f614dcdcb6 be603530-4fe5-49e9-9381-63540b33bd98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5de96b67-0aa7-446e-91be-d1e0250aa316, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>], logical_port=75586f61-07ff-4cd0-9aa1-9845359a1fe6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc771e38b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.456 155391 INFO neutron.agent.ovn.metadata.agent [-] Port 75586f61-07ff-4cd0-9aa1-9845359a1fe6 in datapath 967fc097-5eb9-45d1-9d27-cd16a27cb74e unbound from our chassis#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.457 155391 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 967fc097-5eb9-45d1-9d27-cd16a27cb74e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.458 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a40838e-8213-445e-bc24-bddced7addd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.459 155391 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e namespace which is not needed anymore#033[00m
Feb  2 13:01:55 np0005605476 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb  2 13:01:55 np0005605476 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 15.000s CPU time.
Feb  2 13:01:55 np0005605476 systemd-machined[208080]: Machine qemu-29-instance-0000001d terminated.
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.565 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [NOTICE]   (272157) : haproxy version is 2.8.14-c23fe91
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [NOTICE]   (272157) : path to executable is /usr/sbin/haproxy
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [WARNING]  (272157) : Exiting Master process...
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [WARNING]  (272157) : Exiting Master process...
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [ALERT]    (272157) : Current worker (272159) exited with code 143 (Terminated)
Feb  2 13:01:55 np0005605476 neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e[272153]: [WARNING]  (272157) : All workers exited. Exiting... (0)
Feb  2 13:01:55 np0005605476 systemd[1]: libpod-b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9.scope: Deactivated successfully.
Feb  2 13:01:55 np0005605476 podman[273075]: 2026-02-02 18:01:55.577983536 +0000 UTC m=+0.051743325 container died b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 13:01:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9-userdata-shm.mount: Deactivated successfully.
Feb  2 13:01:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d4470fbe4a847ba9a68313204720d30a9b1db1e5bbd1bb42e33e8f582d285903-merged.mount: Deactivated successfully.
Feb  2 13:01:55 np0005605476 podman[273075]: 2026-02-02 18:01:55.614069748 +0000 UTC m=+0.087829537 container cleanup b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.615 239853 INFO nova.virt.libvirt.driver [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Instance destroyed successfully.#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.616 239853 DEBUG nova.objects.instance [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lazy-loading 'resources' on Instance uuid 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 13:01:55 np0005605476 systemd[1]: libpod-conmon-b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9.scope: Deactivated successfully.
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.628 239853 DEBUG nova.virt.libvirt.vif [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T18:00:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1917551112',display_name='tempest-SnapshotDataIntegrityTests-server-1917551112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1917551112',id=29,image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcTSzO5/8KojM1MLUWkMK6qy2V5C4TV9O40HgYTNurKXbRFAxZyQQsb6UT9A+x9JmkPDulSDIxxh2hVKzYhHYd9VcbaUH4uFix/tlL5lTqqzCf4k5lqJSGlll+jKCctdw==',key_name='tempest-keypair-1269425205',keypairs=<?>,launch_index=0,launched_at=2026-02-02T18:00:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='896604c79c574097a167451efa4ee5b2',ramdisk_id='',reservation_id='r-pwwxyol5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88ad7b87-724c-4a9f-a946-6c9736783609',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-1188948708',owner_user_name='tempest-SnapshotDataIntegrityTests-1188948708-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T18:00:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9d3a264efbe443c860b536305fa7e8a',uuid=3ba4448b-74c6-491d-bbbe-a1f5e2e9852e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.628 239853 DEBUG nova.network.os_vif_util [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converting VIF {"id": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "address": "fa:16:3e:30:d8:b7", "network": {"id": "967fc097-5eb9-45d1-9d27-cd16a27cb74e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1451982786-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "896604c79c574097a167451efa4ee5b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75586f61-07", "ovs_interfaceid": "75586f61-07ff-4cd0-9aa1-9845359a1fe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.629 239853 DEBUG nova.network.os_vif_util [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.629 239853 DEBUG os_vif [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.631 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.631 239853 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75586f61-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.632 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.634 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.637 239853 INFO os_vif [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:d8:b7,bridge_name='br-int',has_traffic_filtering=True,id=75586f61-07ff-4cd0-9aa1-9845359a1fe6,network=Network(967fc097-5eb9-45d1-9d27-cd16a27cb74e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75586f61-07')#033[00m
Feb  2 13:01:55 np0005605476 podman[273115]: 2026-02-02 18:01:55.676653859 +0000 UTC m=+0.041851626 container remove b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.681 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[26888b37-baff-4b17-96be-8ee0e8b529c8]: (4, ('Mon Feb  2 06:01:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e (b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9)\nb630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9\nMon Feb  2 06:01:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e (b630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9)\nb630d3dc37514c6fd297f48ce077a9ea5fa195391bfacc2148e51ccdb707daa9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.683 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[308ee33d-e2ca-4073-b276-de998e54b9bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.685 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap967fc097-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.687 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 kernel: tap967fc097-50: left promiscuous mode
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.691 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.695 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[c044da62-b0d3-457a-8e5c-2a69cba5fa2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:01:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 351 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 5.9 KiB/s wr, 130 op/s
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.714 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[35374035-05c7-4f75-9946-f8696eed4856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.717 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[284bdf2a-8431-49eb-ae1d-a715b35b92dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.728 246686 DEBUG oslo.privsep.daemon [-] privsep: reply[190386a3-50b9-4917-8b7f-3099700009b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449013, 'reachable_time': 34074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273146, 'error': None, 'target': 'ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.731 155891 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-967fc097-5eb9-45d1-9d27-cd16a27cb74e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 13:01:55 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:55.732 155891 DEBUG oslo.privsep.daemon [-] privsep: reply[56e24df7-d52d-4356-b011-10cf3815bfe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 13:01:55 np0005605476 systemd[1]: run-netns-ovnmeta\x2d967fc097\x2d5eb9\x2d45d1\x2d9d27\x2dcd16a27cb74e.mount: Deactivated successfully.
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.863 239853 INFO nova.virt.libvirt.driver [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Deleting instance files /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_del#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.864 239853 INFO nova.virt.libvirt.driver [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Deletion of /var/lib/nova/instances/3ba4448b-74c6-491d-bbbe-a1f5e2e9852e_del complete#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.909 239853 INFO nova.compute.manager [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.910 239853 DEBUG oslo.service.loopingcall [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.910 239853 DEBUG nova.compute.manager [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 13:01:55 np0005605476 nova_compute[239846]: 2026-02-02 18:01:55.911 239853 DEBUG nova.network.neutron [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.004 239853 DEBUG nova.compute.manager [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-unplugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.005 239853 DEBUG oslo_concurrency.lockutils [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.005 239853 DEBUG oslo_concurrency.lockutils [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.006 239853 DEBUG oslo_concurrency.lockutils [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.006 239853 DEBUG nova.compute.manager [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] No waiting events found dispatching network-vif-unplugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.006 239853 DEBUG nova.compute.manager [req-7ba48167-d490-44e6-8b32-31943a87094a req-d387123a-371e-47ef-94f3-25540aad30ef e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-unplugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 13:01:56 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:56.221 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.222 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:01:56 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:01:56.222 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.659 239853 DEBUG nova.network.neutron [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.677 239853 INFO nova.compute.manager [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Took 0.77 seconds to deallocate network for instance.#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.714 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.714 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:56 np0005605476 nova_compute[239846]: 2026-02-02 18:01:56.777 239853 DEBUG oslo_concurrency.processutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:01:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:01:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138881220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.286 239853 DEBUG oslo_concurrency.processutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.291 239853 DEBUG nova.compute.provider_tree [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.314 239853 DEBUG nova.scheduler.client.report [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.334 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.365 239853 INFO nova.scheduler.client.report [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Deleted allocations for instance 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e#033[00m
Feb  2 13:01:57 np0005605476 nova_compute[239846]: 2026-02-02 18:01:57.432 239853 DEBUG oslo_concurrency.lockutils [None req-7de24e30-c96f-4a26-b7f0-852dc39b2f58 b9d3a264efbe443c860b536305fa7e8a 896604c79c574097a167451efa4ee5b2 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 351 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 4.6 KiB/s wr, 101 op/s
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.073 239853 DEBUG nova.compute.manager [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.074 239853 DEBUG oslo_concurrency.lockutils [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Acquiring lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.074 239853 DEBUG oslo_concurrency.lockutils [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.074 239853 DEBUG oslo_concurrency.lockutils [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] Lock "3ba4448b-74c6-491d-bbbe-a1f5e2e9852e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.075 239853 DEBUG nova.compute.manager [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] No waiting events found dispatching network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.075 239853 WARNING nova.compute.manager [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received unexpected event network-vif-plugged-75586f61-07ff-4cd0-9aa1-9845359a1fe6 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 13:01:58 np0005605476 nova_compute[239846]: 2026-02-02 18:01:58.075 239853 DEBUG nova.compute.manager [req-173ce588-b8a9-4afe-b370-635163e8df6d req-0736f727-93e1-44e5-aa21-871ab1d10b9b e09d5a440ad44db79012331e38b5457e 6c5f862d636844ad8564cec6268b9aa8 - - default default] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Received event network-vif-deleted-75586f61-07ff-4cd0-9aa1-9845359a1fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 13:01:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 303 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 59 op/s
Feb  2 13:02:00 np0005605476 nova_compute[239846]: 2026-02-02 18:02:00.566 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:00 np0005605476 nova_compute[239846]: 2026-02-02 18:02:00.633 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Feb  2 13:02:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Feb  2 13:02:00 np0005605476 ceph-mon[75197]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Feb  2 13:02:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Feb  2 13:02:02 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:02:02.224 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:02:02 np0005605476 nova_compute[239846]: 2026-02-02 18:02:02.368 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:02 np0005605476 nova_compute[239846]: 2026-02-02 18:02:02.376 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:02 np0005605476 podman[273173]: 2026-02-02 18:02:02.596848316 +0000 UTC m=+0.042619328 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 13:02:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Feb  2 13:02:04 np0005605476 podman[273193]: 2026-02-02 18:02:04.615694504 +0000 UTC m=+0.069293572 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 13:02:05 np0005605476 nova_compute[239846]: 2026-02-02 18:02:05.606 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:05 np0005605476 nova_compute[239846]: 2026-02-02 18:02:05.634 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Feb  2 13:02:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 18 op/s
Feb  2 13:02:10 np0005605476 nova_compute[239846]: 2026-02-02 18:02:10.608 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:10 np0005605476 nova_compute[239846]: 2026-02-02 18:02:10.613 239853 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770055315.612238, 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 13:02:10 np0005605476 nova_compute[239846]: 2026-02-02 18:02:10.613 239853 INFO nova.compute.manager [-] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] VM Stopped (Lifecycle Event)#033[00m
Feb  2 13:02:10 np0005605476 nova_compute[239846]: 2026-02-02 18:02:10.633 239853 DEBUG nova.compute.manager [None req-9f1d6cf6-fd99-4d66-bf59-e9f7dba81936 - - - - - -] [instance: 3ba4448b-74c6-491d-bbbe-a1f5e2e9852e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 13:02:10 np0005605476 nova_compute[239846]: 2026-02-02 18:02:10.635 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.663 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.664 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.665 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5028 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.665 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.665 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:02:15 np0005605476 nova_compute[239846]: 2026-02-02 18:02:15.666 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:17 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:02:17 np0005605476 podman[273365]: 2026-02-02 18:02:17.949286785 +0000 UTC m=+0.037427080 container create 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 13:02:17 np0005605476 systemd[1]: Started libpod-conmon-58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb.scope.
Feb  2 13:02:18 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:17.93248499 +0000 UTC m=+0.020625285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:18.030796052 +0000 UTC m=+0.118936347 container init 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:18.040191398 +0000 UTC m=+0.128331693 container start 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:18.04378172 +0000 UTC m=+0.131922015 container attach 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 13:02:18 np0005605476 tender_keller[273381]: 167 167
Feb  2 13:02:18 np0005605476 systemd[1]: libpod-58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb.scope: Deactivated successfully.
Feb  2 13:02:18 np0005605476 conmon[273381]: conmon 58457fafed1f579b9455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb.scope/container/memory.events
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:18.050025586 +0000 UTC m=+0.138165881 container died 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:02:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3ad816c8f72c29845accf2048cea20f1719e77433ee37d97eab9bb99b4e5e0dd-merged.mount: Deactivated successfully.
Feb  2 13:02:18 np0005605476 podman[273365]: 2026-02-02 18:02:18.092638052 +0000 UTC m=+0.180778357 container remove 58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb  2 13:02:18 np0005605476 systemd[1]: libpod-conmon-58457fafed1f579b9455b08c996fca16061675958c3db68b76aa3ffa7d5b6bcb.scope: Deactivated successfully.
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.21480523 +0000 UTC m=+0.044039067 container create 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:18 np0005605476 systemd[1]: Started libpod-conmon-960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729.scope.
Feb  2 13:02:18 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:18 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.194829285 +0000 UTC m=+0.024063142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.306102084 +0000 UTC m=+0.135335941 container init 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.310826198 +0000 UTC m=+0.140060035 container start 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.323024743 +0000 UTC m=+0.152258610 container attach 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 13:02:18 np0005605476 musing_wu[273420]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:02:18 np0005605476 musing_wu[273420]: --> All data devices are unavailable
Feb  2 13:02:18 np0005605476 systemd[1]: libpod-960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729.scope: Deactivated successfully.
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.734095527 +0000 UTC m=+0.563329364 container died 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 13:02:18 np0005605476 systemd[1]: var-lib-containers-storage-overlay-59e3e412141e5350a63e19f701fbc02843cfc25d977dba89106c32dbab11d1ee-merged.mount: Deactivated successfully.
Feb  2 13:02:18 np0005605476 podman[273403]: 2026-02-02 18:02:18.834114868 +0000 UTC m=+0.663348705 container remove 960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wu, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:18 np0005605476 systemd[1]: libpod-conmon-960efdc7cefceba9062125897dc5a3fec1d422a117cc8bf6b6cd73c01d65b729.scope: Deactivated successfully.
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.295336692 +0000 UTC m=+0.088114815 container create b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.229611871 +0000 UTC m=+0.022390024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:19 np0005605476 systemd[1]: Started libpod-conmon-b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f.scope.
Feb  2 13:02:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.474605636 +0000 UTC m=+0.267383769 container init b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.48325721 +0000 UTC m=+0.276035323 container start b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:02:19 np0005605476 clever_swartz[273531]: 167 167
Feb  2 13:02:19 np0005605476 systemd[1]: libpod-b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f.scope: Deactivated successfully.
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.545007148 +0000 UTC m=+0.337785351 container attach b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.545591575 +0000 UTC m=+0.338369718 container died b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:02:19 np0005605476 systemd[1]: var-lib-containers-storage-overlay-9f55b3ee147cbd7e5873fadbf28e973827a2e2efeb8539227a0c9642430f5400-merged.mount: Deactivated successfully.
Feb  2 13:02:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:02:19 np0005605476 podman[273514]: 2026-02-02 18:02:19.754361293 +0000 UTC m=+0.547139406 container remove b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_swartz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:02:19 np0005605476 systemd[1]: libpod-conmon-b19bbf0943bfc7bb54b5b5f59e6e3d52af07d5ab2835097120cc2b2bc4ccd06f.scope: Deactivated successfully.
Feb  2 13:02:19 np0005605476 podman[273555]: 2026-02-02 18:02:19.899104009 +0000 UTC m=+0.044990514 container create 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 13:02:19 np0005605476 systemd[1]: Started libpod-conmon-61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d.scope.
Feb  2 13:02:19 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687318a99cccfc4d3398662ed516d07fb639b394270f0512c774be21f356ba77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687318a99cccfc4d3398662ed516d07fb639b394270f0512c774be21f356ba77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687318a99cccfc4d3398662ed516d07fb639b394270f0512c774be21f356ba77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:19 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687318a99cccfc4d3398662ed516d07fb639b394270f0512c774be21f356ba77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:19 np0005605476 podman[273555]: 2026-02-02 18:02:19.879850944 +0000 UTC m=+0.025737469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:19 np0005605476 podman[273555]: 2026-02-02 18:02:19.975761688 +0000 UTC m=+0.121648203 container init 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 13:02:19 np0005605476 podman[273555]: 2026-02-02 18:02:19.984068584 +0000 UTC m=+0.129955089 container start 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 13:02:19 np0005605476 podman[273555]: 2026-02-02 18:02:19.987611904 +0000 UTC m=+0.133498419 container attach 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]: {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    "0": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "devices": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "/dev/loop3"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            ],
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_name": "ceph_lv0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_size": "21470642176",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "name": "ceph_lv0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "tags": {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_name": "ceph",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.crush_device_class": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.encrypted": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.objectstore": "bluestore",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_id": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.vdo": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.with_tpm": "0"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            },
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "vg_name": "ceph_vg0"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        }
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    ],
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    "1": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "devices": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "/dev/loop4"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            ],
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_name": "ceph_lv1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_size": "21470642176",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "name": "ceph_lv1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "tags": {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_name": "ceph",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.crush_device_class": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.encrypted": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.objectstore": "bluestore",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_id": "1",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.vdo": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.with_tpm": "0"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            },
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "vg_name": "ceph_vg1"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        }
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    ],
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    "2": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "devices": [
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "/dev/loop5"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            ],
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_name": "ceph_lv2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_size": "21470642176",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "name": "ceph_lv2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "tags": {
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.cluster_name": "ceph",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.crush_device_class": "",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.encrypted": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.objectstore": "bluestore",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osd_id": "2",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.vdo": "0",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:                "ceph.with_tpm": "0"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            },
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "type": "block",
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:            "vg_name": "ceph_vg2"
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:        }
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]:    ]
Feb  2 13:02:20 np0005605476 agitated_feistel[273572]: }
Feb  2 13:02:20 np0005605476 systemd[1]: libpod-61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d.scope: Deactivated successfully.
Feb  2 13:02:20 np0005605476 podman[273555]: 2026-02-02 18:02:20.28602264 +0000 UTC m=+0.431909145 container died 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-687318a99cccfc4d3398662ed516d07fb639b394270f0512c774be21f356ba77-merged.mount: Deactivated successfully.
Feb  2 13:02:20 np0005605476 podman[273555]: 2026-02-02 18:02:20.324532659 +0000 UTC m=+0.470419164 container remove 61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:02:20 np0005605476 systemd[1]: libpod-conmon-61c6d50e0a785aeba85a40a66a9118a80ca5deb62ee47dc0d2e4cd533644ee8d.scope: Deactivated successfully.
Feb  2 13:02:20 np0005605476 nova_compute[239846]: 2026-02-02 18:02:20.666 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.741769828 +0000 UTC m=+0.036873525 container create 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:20 np0005605476 systemd[1]: Started libpod-conmon-0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a.scope.
Feb  2 13:02:20 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.815485084 +0000 UTC m=+0.110588871 container init 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.722668108 +0000 UTC m=+0.017771845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.821308839 +0000 UTC m=+0.116412536 container start 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 13:02:20 np0005605476 stupefied_easley[273671]: 167 167
Feb  2 13:02:20 np0005605476 systemd[1]: libpod-0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a.scope: Deactivated successfully.
Feb  2 13:02:20 np0005605476 conmon[273671]: conmon 0987fbc89ea035d90566 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a.scope/container/memory.events
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.828807132 +0000 UTC m=+0.123910859 container attach 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.829497401 +0000 UTC m=+0.124601098 container died 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:20 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c6bd4c9bb2cb7eda66eb693a238be20eb99984279994cfd6959240615dbb2505-merged.mount: Deactivated successfully.
Feb  2 13:02:20 np0005605476 podman[273655]: 2026-02-02 18:02:20.869141293 +0000 UTC m=+0.164244990 container remove 0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 13:02:20 np0005605476 systemd[1]: libpod-conmon-0987fbc89ea035d90566cf561ac12a80e4ed94ea0b12f4ddc059c361470f625a.scope: Deactivated successfully.
Feb  2 13:02:20 np0005605476 podman[273693]: 2026-02-02 18:02:20.979170007 +0000 UTC m=+0.033477958 container create ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:02:21 np0005605476 systemd[1]: Started libpod-conmon-ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885.scope.
Feb  2 13:02:21 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:02:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94af30d96be52f172988deb971a620e1e1c47d3c07ddbc786c3a9323857c46a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94af30d96be52f172988deb971a620e1e1c47d3c07ddbc786c3a9323857c46a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94af30d96be52f172988deb971a620e1e1c47d3c07ddbc786c3a9323857c46a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:21 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94af30d96be52f172988deb971a620e1e1c47d3c07ddbc786c3a9323857c46a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:02:21 np0005605476 podman[273693]: 2026-02-02 18:02:21.054634653 +0000 UTC m=+0.108942604 container init ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:02:21 np0005605476 podman[273693]: 2026-02-02 18:02:20.962915587 +0000 UTC m=+0.017223538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:02:21 np0005605476 podman[273693]: 2026-02-02 18:02:21.065388637 +0000 UTC m=+0.119696568 container start ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 13:02:21 np0005605476 podman[273693]: 2026-02-02 18:02:21.06938326 +0000 UTC m=+0.123691281 container attach ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb  2 13:02:21 np0005605476 lvm[273788]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:02:21 np0005605476 lvm[273788]: VG ceph_vg0 finished
Feb  2 13:02:21 np0005605476 lvm[273789]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:02:21 np0005605476 lvm[273789]: VG ceph_vg1 finished
Feb  2 13:02:21 np0005605476 lvm[273791]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:02:21 np0005605476 lvm[273791]: VG ceph_vg2 finished
Feb  2 13:02:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:02:21 np0005605476 zealous_antonelli[273710]: {}
Feb  2 13:02:21 np0005605476 systemd[1]: libpod-ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885.scope: Deactivated successfully.
Feb  2 13:02:21 np0005605476 systemd[1]: libpod-ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885.scope: Consumed 1.103s CPU time.
Feb  2 13:02:21 np0005605476 podman[273794]: 2026-02-02 18:02:21.875475605 +0000 UTC m=+0.021693075 container died ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:02:21 np0005605476 systemd[1]: var-lib-containers-storage-overlay-94af30d96be52f172988deb971a620e1e1c47d3c07ddbc786c3a9323857c46a7-merged.mount: Deactivated successfully.
Feb  2 13:02:21 np0005605476 podman[273794]: 2026-02-02 18:02:21.911112083 +0000 UTC m=+0.057329513 container remove ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_antonelli, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 13:02:21 np0005605476 systemd[1]: libpod-conmon-ffd460c9907259419189686f4ec6ee906382e4bb64f4fd84444ad3b1a8a9d885.scope: Deactivated successfully.
Feb  2 13:02:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:02:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:02:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:22 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:02:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:02:25 np0005605476 nova_compute[239846]: 2026-02-02 18:02:25.668 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:02:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:02:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:02:30 np0005605476 nova_compute[239846]: 2026-02-02 18:02:30.668 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:30 np0005605476 nova_compute[239846]: 2026-02-02 18:02:30.670 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:33 np0005605476 podman[273835]: 2026-02-02 18:02:33.08477988 +0000 UTC m=+0.039241812 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb  2 13:02:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:35 np0005605476 podman[273855]: 2026-02-02 18:02:35.644212286 +0000 UTC m=+0.097143100 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:02:35 np0005605476 nova_compute[239846]: 2026-02-02 18:02:35.669 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:35 np0005605476 ovn_controller[146041]: 2026-02-02T18:02:35Z|00280|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb  2 13:02:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:02:36
Feb  2 13:02:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:02:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:02:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Feb  2 13:02:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:02:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:02:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:40 np0005605476 nova_compute[239846]: 2026-02-02 18:02:40.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:40 np0005605476 nova_compute[239846]: 2026-02-02 18:02:40.672 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:02:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:44 np0005605476 nova_compute[239846]: 2026-02-02 18:02:44.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.277 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.278 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.278 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.278 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.278 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.674 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.676 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.676 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.677 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.708 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.709 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:02:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:02:45 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364950958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.817 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.971 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.973 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.973 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:02:45 np0005605476 nova_compute[239846]: 2026-02-02 18:02:45.973 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.235 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.236 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.253 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:02:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:02:46.654 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:02:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:02:46.655 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:02:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:02:46.655 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:02:46 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:02:46 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638538560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.735 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.741 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.759 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.793 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:02:46 np0005605476 nova_compute[239846]: 2026-02-02 18:02:46.793 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:02:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:49 np0005605476 nova_compute[239846]: 2026-02-02 18:02:49.794 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:49 np0005605476 nova_compute[239846]: 2026-02-02 18:02:49.794 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:02:49 np0005605476 nova_compute[239846]: 2026-02-02 18:02:49.794 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:02:49 np0005605476 nova_compute[239846]: 2026-02-02 18:02:49.807 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:02:49 np0005605476 nova_compute[239846]: 2026-02-02 18:02:49.808 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:50 np0005605476 nova_compute[239846]: 2026-02-02 18:02:50.709 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:02:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:52 np0005605476 nova_compute[239846]: 2026-02-02 18:02:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:52 np0005605476 nova_compute[239846]: 2026-02-02 18:02:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:53 np0005605476 nova_compute[239846]: 2026-02-02 18:02:53.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:53 np0005605476 nova_compute[239846]: 2026-02-02 18:02:53.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:02:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:55 np0005605476 nova_compute[239846]: 2026-02-02 18:02:55.711 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.716409) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375716444, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1088, "num_deletes": 251, "total_data_size": 1524716, "memory_usage": 1548752, "flush_reason": "Manual Compaction"}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375721356, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 950399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34168, "largest_seqno": 35255, "table_properties": {"data_size": 946099, "index_size": 1824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11286, "raw_average_key_size": 20, "raw_value_size": 936815, "raw_average_value_size": 1738, "num_data_blocks": 82, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055279, "oldest_key_time": 1770055279, "file_creation_time": 1770055375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4969 microseconds, and 1963 cpu microseconds.
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.721383) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 950399 bytes OK
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.721394) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.722742) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.722751) EVENT_LOG_v1 {"time_micros": 1770055375722748, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.722765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1519645, prev total WAL file size 1519645, number of live WAL files 2.
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.723205) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(928KB)], [68(11MB)]
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375723254, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13039855, "oldest_snapshot_seqno": -1}
Feb  2 13:02:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6834 keys, 10254243 bytes, temperature: kUnknown
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375763135, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10254243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10204201, "index_size": 31882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 171102, "raw_average_key_size": 25, "raw_value_size": 10077271, "raw_average_value_size": 1474, "num_data_blocks": 1276, "num_entries": 6834, "num_filter_entries": 6834, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.763381) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10254243 bytes
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.764529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 326.4 rd, 256.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(24.5) write-amplify(10.8) OK, records in: 7310, records dropped: 476 output_compression: NoCompression
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.764544) EVENT_LOG_v1 {"time_micros": 1770055375764536, "job": 38, "event": "compaction_finished", "compaction_time_micros": 39954, "compaction_time_cpu_micros": 18211, "output_level": 6, "num_output_files": 1, "total_output_size": 10254243, "num_input_records": 7310, "num_output_records": 6834, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375764707, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055375765599, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.723118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.765667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.765672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.765674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.765675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:55 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:02:55.765677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:02:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:02:58 np0005605476 systemd-logind[799]: New session 51 of user zuul.
Feb  2 13:02:58 np0005605476 systemd[1]: Started Session 51 of User zuul.
Feb  2 13:02:59 np0005605476 nova_compute[239846]: 2026-02-02 18:02:59.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:02:59 np0005605476 nova_compute[239846]: 2026-02-02 18:02:59.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 13:02:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:00 np0005605476 nova_compute[239846]: 2026-02-02 18:03:00.712 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:03:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:03:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:01 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19076 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:02 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19078 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:02 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 13:03:02 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699875976' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  2 13:03:03 np0005605476 nova_compute[239846]: 2026-02-02 18:03:03.281 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:03:03 np0005605476 nova_compute[239846]: 2026-02-02 18:03:03.281 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 13:03:03 np0005605476 nova_compute[239846]: 2026-02-02 18:03:03.307 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 13:03:03 np0005605476 podman[274182]: 2026-02-02 18:03:03.549860163 +0000 UTC m=+0.060134073 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 13:03:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:03:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292333520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:03:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:03:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292333520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:03:05 np0005605476 nova_compute[239846]: 2026-02-02 18:03:05.714 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:03:05 np0005605476 nova_compute[239846]: 2026-02-02 18:03:05.715 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:03:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:03:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:06 np0005605476 podman[274235]: 2026-02-02 18:03:06.634140785 +0000 UTC m=+0.085858341 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb  2 13:03:07 np0005605476 nova_compute[239846]: 2026-02-02 18:03:07.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:03:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:09 np0005605476 ovs-vsctl[274309]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  2 13:03:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:10 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  2 13:03:10 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  2 13:03:10 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 13:03:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:03:10 np0005605476 nova_compute[239846]: 2026-02-02 18:03:10.715 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:03:10 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: cache status {prefix=cache status} (starting...)
Feb  2 13:03:10 np0005605476 lvm[274633]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:03:10 np0005605476 lvm[274633]: VG ceph_vg2 finished
Feb  2 13:03:10 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: client ls {prefix=client ls} (starting...)
Feb  2 13:03:11 np0005605476 lvm[274662]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:03:11 np0005605476 lvm[274662]: VG ceph_vg1 finished
Feb  2 13:03:11 np0005605476 lvm[274668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:03:11 np0005605476 lvm[274668]: VG ceph_vg0 finished
Feb  2 13:03:11 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19086 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:11 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: damage ls {prefix=damage ls} (starting...)
Feb  2 13:03:11 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump loads {prefix=dump loads} (starting...)
Feb  2 13:03:11 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19088 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:11 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  2 13:03:11 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005378974' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb  2 13:03:12 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19092 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:12 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292978981' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:03:12 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19096 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:12 np0005605476 ceph-mgr[75493]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:03:12 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: 2026-02-02T18:03:12.707+0000 7f7c633f1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:03:12 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: ops {prefix=ops} (starting...)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb  2 13:03:12 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512816342' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775578682' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb  2 13:03:13 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: session ls {prefix=session ls} (starting...)
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241389458' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb  2 13:03:13 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: status {prefix=status} (starting...)
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 13:03:13 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313480000' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 13:03:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:14 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19106 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721320318' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 13:03:14 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19110 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816115555' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 13:03:14 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459505892' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633393270' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850123975' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:03:15 np0005605476 nova_compute[239846]: 2026-02-02 18:03:15.718 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 13:03:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 13:03:15 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117813515' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  2 13:03:16 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19122 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:16 np0005605476 ceph-mgr[75493]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 13:03:16 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: 2026-02-02T18:03:16.013+0000 7f7c633f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 13:03:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 13:03:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397266411' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 13:03:16 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  2 13:03:16 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094529072' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb  2 13:03:16 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19128 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1646592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5660 writes, 24K keys, 5660 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5660 writes, 917 syncs, 6.17 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561085432430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.914367676s of 299.953704834s, submitted: 24
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917468 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcec8000/0x0/0x4ffc00000, data 0xb0a67/0x164000, compress 0x0/0x0/0x0, omap 0xfbca, meta 0x2bc0436), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.143590927s of 26.433855057s, submitted: 90
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920962 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1081344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 17702912 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 119 ms_handle_reset con 0x561086235400 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 17809408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fbebd000/0x0/0x4ffc00000, data 0x10b4203/0x116b000, compress 0x0/0x0/0x0, omap 0x101ac, meta 0x2bbfe54), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 119 heartbeat osd_stat(store_statfs(0x4fbebc000/0x0/0x4ffc00000, data 0x10b5dbb/0x116e000, compress 0x0/0x0/0x0, omap 0x10437, meta 0x2bbfbc9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17334272 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 120 ms_handle_reset con 0x5610896d5c00 session 0x561086d48c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17227776 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084734 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 17186816 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fb246000/0x0/0x4ffc00000, data 0x1d27996/0x1de2000, compress 0x0/0x0/0x0, omap 0x10ac1, meta 0x2bbf53f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 17178624 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 17178624 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 17178624 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fb246000/0x0/0x4ffc00000, data 0x1d27996/0x1de2000, compress 0x0/0x0/0x0, omap 0x10ac1, meta 0x2bbf53f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 17178624 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084734 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 17170432 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 17154048 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.592693329s of 11.773922920s, submitted: 48
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17268736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fb245000/0x0/0x4ffc00000, data 0x1d29415/0x1de5000, compress 0x0/0x0/0x0, omap 0x10d94, meta 0x2bbf26c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17268736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17268736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086740 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17268736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17260544 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17260544 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fb245000/0x0/0x4ffc00000, data 0x1d29415/0x1de5000, compress 0x0/0x0/0x0, omap 0x10d94, meta 0x2bbf26c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17260544 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fb245000/0x0/0x4ffc00000, data 0x1d29415/0x1de5000, compress 0x0/0x0/0x0, omap 0x10d94, meta 0x2bbf26c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17260544 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086740 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fb245000/0x0/0x4ffc00000, data 0x1d29415/0x1de5000, compress 0x0/0x0/0x0, omap 0x10d94, meta 0x2bbf26c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17260544 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fb245000/0x0/0x4ffc00000, data 0x1d29415/0x1de5000, compress 0x0/0x0/0x0, omap 0x10d94, meta 0x2bbf26c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17252352 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 121 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.994503975s of 11.001032829s, submitted: 25
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 17113088 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 123 ms_handle_reset con 0x5610896d5800 session 0x561086d49500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 17113088 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fb23d000/0x0/0x4ffc00000, data 0x1d2cba1/0x1deb000, compress 0x0/0x0/0x0, omap 0x112aa, meta 0x2bbed56), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092752 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 17113088 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 ms_handle_reset con 0x5610896d5400 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16973824 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fb23b000/0x0/0x4ffc00000, data 0x1d2e769/0x1def000, compress 0x0/0x0/0x0, omap 0x11535, meta 0x2bbeacb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16973824 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 ms_handle_reset con 0x5610896d5000 session 0x5610897ec700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 ms_handle_reset con 0x561086235400 session 0x5610897ece00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096066 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 ms_handle_reset con 0x5610896d5800 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fb23b000/0x0/0x4ffc00000, data 0x1d2e759/0x1dee000, compress 0x0/0x0/0x0, omap 0x11535, meta 0x2bbeacb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb234000/0x0/0x4ffc00000, data 0x1d31de4/0x1df4000, compress 0x0/0x0/0x0, omap 0x11a95, meta 0x2bbe56b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 ms_handle_reset con 0x5610896d5c00 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 13:03:17 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.287316322s of 12.351890564s, submitted: 36
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 ms_handle_reset con 0x5610896d4c00 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102780 data_alloc: 218103808 data_used: 9835
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 ms_handle_reset con 0x5610896d4800 session 0x561087b14000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 ms_handle_reset con 0x561086235400 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb236000/0x0/0x4ffc00000, data 0x1d31e56/0x1df6000, compress 0x0/0x0/0x0, omap 0x11a95, meta 0x2bbe56b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 127 ms_handle_reset con 0x5610896d4c00 session 0x5610897ed180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 127 ms_handle_reset con 0x5610896d5800 session 0x561088d00a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108509 data_alloc: 218103808 data_used: 9933
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 128 ms_handle_reset con 0x5610896d5c00 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 15712256 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb228000/0x0/0x4ffc00000, data 0x1d371fc/0x1e00000, compress 0x0/0x0/0x0, omap 0x122bc, meta 0x2bbdd44), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 ms_handle_reset con 0x5610893bc400 session 0x561087b9d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 15597568 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 ms_handle_reset con 0x5610893bc400 session 0x561088c90e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 ms_handle_reset con 0x561086235400 session 0x561087b9cfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb228000/0x0/0x4ffc00000, data 0x1d371fc/0x1e00000, compress 0x0/0x0/0x0, omap 0x122bc, meta 0x2bbdd44), peers [0,1] op hist [4])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 130 ms_handle_reset con 0x5610896d5000 session 0x561088e21a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 15589376 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 131 ms_handle_reset con 0x5610896d5400 session 0x561089492380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 15474688 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119645 data_alloc: 218103808 data_used: 10534
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 15474688 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.791577339s of 10.911414146s, submitted: 52
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 131 ms_handle_reset con 0x5610896d5c00 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 131 ms_handle_reset con 0x561086235400 session 0x561088d001c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 131 ms_handle_reset con 0x5610893bc400 session 0x561088c91880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 15441920 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 132 ms_handle_reset con 0x5610896d4c00 session 0x5610897ed340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 15425536 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb226000/0x0/0x4ffc00000, data 0x1d3a97a/0x1e04000, compress 0x0/0x0/0x0, omap 0x127d2, meta 0x2bbd82e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 133 ms_handle_reset con 0x5610896d5800 session 0x561086e9ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 15409152 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb21e000/0x0/0x4ffc00000, data 0x1d3e142/0x1e0a000, compress 0x0/0x0/0x0, omap 0x12ce8, meta 0x2bbd318), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 15409152 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127663 data_alloc: 218103808 data_used: 11646
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 15400960 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 134 ms_handle_reset con 0x5610896d5c00 session 0x561086cdf880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 134 handle_osd_map epochs [135,136], i have 134, src has [1,136]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 15351808 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 136 ms_handle_reset con 0x561086235400 session 0x561088d01c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 15343616 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb215000/0x0/0x4ffc00000, data 0x1d4351a/0x1e13000, compress 0x0/0x0/0x0, omap 0x132da, meta 0x2bbcd26), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 ms_handle_reset con 0x5610893bc400 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 15515648 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb213000/0x0/0x4ffc00000, data 0x1d4516c/0x1e17000, compress 0x0/0x0/0x0, omap 0x13565, meta 0x2bbca9b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 ms_handle_reset con 0x5610896d4c00 session 0x561089493a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 ms_handle_reset con 0x5610896d5800 session 0x561086e9a380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 15360000 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 ms_handle_reset con 0x5610896d5400 session 0x561089493500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136055 data_alloc: 218103808 data_used: 11646
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 15360000 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 handle_osd_map epochs [139,139], i have 137, src has [1,139]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 137 handle_osd_map epochs [138,139], i have 137, src has [1,139]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.194019318s of 10.317667961s, submitted: 55
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 139 ms_handle_reset con 0x561086235400 session 0x561087bac8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 140 ms_handle_reset con 0x5610893bc400 session 0x56108944d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 14508032 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 140 ms_handle_reset con 0x5610896d5800 session 0x561087badc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 ms_handle_reset con 0x5610896d4c00 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 ms_handle_reset con 0x5610896d5000 session 0x561086290540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 14352384 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 ms_handle_reset con 0x561086235400 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb204000/0x0/0x4ffc00000, data 0x1d4bf0b/0x1e21000, compress 0x0/0x0/0x0, omap 0x13d7e, meta 0x2bbc282), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 14352384 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 14286848 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb204000/0x0/0x4ffc00000, data 0x1d4bf0b/0x1e21000, compress 0x0/0x0/0x0, omap 0x13d7e, meta 0x2bbc282), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148025 data_alloc: 218103808 data_used: 12303
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 14286848 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 14286848 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 142 ms_handle_reset con 0x5610893bc400 session 0x561087b9c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 14401536 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 142 ms_handle_reset con 0x5610896d4c00 session 0x561088d008c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 14401536 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 14401536 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150193 data_alloc: 218103808 data_used: 12303
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fb205000/0x0/0x4ffc00000, data 0x1d4d9c6/0x1e25000, compress 0x0/0x0/0x0, omap 0x14081, meta 0x2bbbf7f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 142 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 14393344 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 143 ms_handle_reset con 0x5610896d5800 session 0x561086d49a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.332273483s of 10.397913933s, submitted: 58
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 ms_handle_reset con 0x561089704c00 session 0x561087b38000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb1fd000/0x0/0x4ffc00000, data 0x1d511d6/0x1e2b000, compress 0x0/0x0/0x0, omap 0x146a6, meta 0x2bbb95a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb1fd000/0x0/0x4ffc00000, data 0x1d511d6/0x1e2b000, compress 0x0/0x0/0x0, omap 0x146a6, meta 0x2bbb95a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 ms_handle_reset con 0x561086235400 session 0x561086cdea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 ms_handle_reset con 0x5610893bc400 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155569 data_alloc: 218103808 data_used: 12575
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb1fe000/0x0/0x4ffc00000, data 0x1d511c6/0x1e2a000, compress 0x0/0x0/0x0, omap 0x146a6, meta 0x2bbb95a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155569 data_alloc: 218103808 data_used: 12575
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.971953392s of 10.003309250s, submitted: 23
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 ms_handle_reset con 0x5610896d4c00 session 0x56108947e700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb1fc000/0x0/0x4ffc00000, data 0x1d52ca7/0x1e2e000, compress 0x0/0x0/0x0, omap 0x14b7c, meta 0x2bbb484), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 14237696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 ms_handle_reset con 0x5610896d5800 session 0x56108947e1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 14245888 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 ms_handle_reset con 0x561089704800 session 0x5610894928c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 14368768 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 14368768 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb1ff000/0x0/0x4ffc00000, data 0x1d52c45/0x1e2d000, compress 0x0/0x0/0x0, omap 0x14b42, meta 0x2bbb4be), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159085 data_alloc: 218103808 data_used: 12575
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 14368768 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 14368768 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 ms_handle_reset con 0x561086235400 session 0x561086cdefc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 ms_handle_reset con 0x561089704800 session 0x561087b38000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 14344192 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 145 handle_osd_map epochs [145,146], i have 146, src has [1,146]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 ms_handle_reset con 0x561089704400 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 14344192 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 14344192 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb1f8000/0x0/0x4ffc00000, data 0x1d54853/0x1e32000, compress 0x0/0x0/0x0, omap 0x14dcd, meta 0x2bbb233), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 ms_handle_reset con 0x561086d9f000 session 0x561087b9c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167644 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 ms_handle_reset con 0x5610896c8000 session 0x5610897ec8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 14344192 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 ms_handle_reset con 0x561086235400 session 0x56108947fdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 ms_handle_reset con 0x561086d9f000 session 0x56108944ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.964290619s of 10.023301125s, submitted: 42
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 146 handle_osd_map epochs [146,147], i have 147, src has [1,147]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 ms_handle_reset con 0x5610896c9c00 session 0x561088e20e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 14327808 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 ms_handle_reset con 0x5610896c8000 session 0x56108947ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 ms_handle_reset con 0x561089704800 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 ms_handle_reset con 0x561086235400 session 0x561088d016c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 ms_handle_reset con 0x561086d9f000 session 0x561087b9d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb1f4000/0x0/0x4ffc00000, data 0x1d56451/0x1e36000, compress 0x0/0x0/0x0, omap 0x15058, meta 0x2bbafa8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb1f7000/0x0/0x4ffc00000, data 0x1d563d1/0x1e33000, compress 0x0/0x0/0x0, omap 0x15058, meta 0x2bbafa8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168294 data_alloc: 218103808 data_used: 12575
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb1f7000/0x0/0x4ffc00000, data 0x1d563d1/0x1e33000, compress 0x0/0x0/0x0, omap 0x15058, meta 0x2bbafa8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168294 data_alloc: 218103808 data_used: 12575
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb1f7000/0x0/0x4ffc00000, data 0x1d563d1/0x1e33000, compress 0x0/0x0/0x0, omap 0x15058, meta 0x2bbafa8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 14303232 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.891380310s of 10.932271957s, submitted: 34
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 148 ms_handle_reset con 0x5610896c8000 session 0x56108944c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb1f4000/0x0/0x4ffc00000, data 0x1d57e50/0x1e36000, compress 0x0/0x0/0x0, omap 0x1530f, meta 0x2bbacf1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 13230080 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 13361152 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x561087bac8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 13230080 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561089704400 session 0x561088d00a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x561086290540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181557 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 13197312 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c8000 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x56108947f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x56108695b400 session 0x561088e20000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ee000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16192, meta 0x2bb9e6e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x561089493a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16192, meta 0x2bb9e6e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181697 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16192, meta 0x2bb9e6e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16192, meta 0x2bb9e6e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16192, meta 0x2bb9e6e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x561086e9b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.403162003s of 12.484668732s, submitted: 47
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c8000 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183231 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x1631a, meta 0x2bb9ce6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 13213696 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x561088e20fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896d4800 session 0x561086d49a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x1631a, meta 0x2bb9ce6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 12148736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x56108944dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 12148736 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x561088d00fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 11042816 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187506 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 11042816 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c8000 session 0x561088dff340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896d5400 session 0x561088779340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x561087b38e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x163ce, meta 0x2bb9c32), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 10829824 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188886 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c8000 session 0x561086cdfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.661437988s of 10.739489555s, submitted: 46
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x561089492380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ee000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x163ce, meta 0x2bb9c32), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ee000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x163ce, meta 0x2bb9c32), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187413 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896d4c00 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x561088e21880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x561088dff500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c8000 session 0x56108944d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896c9c00 session 0x561088e20700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1f0000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x1656c, meta 0x2bb9a94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x5610896d5800 session 0x561089492fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188141 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ad0/0x1e3d000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086235400 session 0x561089492000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.682782173s of 11.745578766s, submitted: 34
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 ms_handle_reset con 0x561086d9f000 session 0x561086e9a8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 10846208 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb1ef000/0x0/0x4ffc00000, data 0x1d59ac0/0x1e3c000, compress 0x0/0x0/0x0, omap 0x16494, meta 0x2bb9b6c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.644001007s of 15.645300865s, submitted: 1
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 150 ms_handle_reset con 0x5610896c9c00 session 0x56108944ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 10838016 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 151 ms_handle_reset con 0x5610896c8000 session 0x561087badc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 10829824 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 152 ms_handle_reset con 0x5610899ee800 session 0x561086e9ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204244 data_alloc: 218103808 data_used: 12673
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 10854400 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 153 ms_handle_reset con 0x5610899ee400 session 0x561086dfa540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 153 ms_handle_reset con 0x561086235400 session 0x561088e21500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1253376 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 153 ms_handle_reset con 0x561086d9f000 session 0x561087b39880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb1db000/0x0/0x4ffc00000, data 0x1d60a41/0x1e4b000, compress 0x0/0x0/0x0, omap 0x1745f, meta 0x2bb8ba1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 1236992 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 154 ms_handle_reset con 0x5610896c9c00 session 0x561088c91880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 96329728 unmapped: 1187840 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x5610896c8000 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x561086235400 session 0x561088dff6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x561086d9f000 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x5610896c9c00 session 0x561088d00000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x5610899ee400 session 0x561087b38380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x5610899eec00 session 0x561088c90e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x5610899ee800 session 0x561087b14c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 14598144 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 ms_handle_reset con 0x561086d9f000 session 0x561087b39180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305034 data_alloc: 234881024 data_used: 13644193
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 96600064 unmapped: 14565376 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 156 ms_handle_reset con 0x5610896c9c00 session 0x561087b9ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 156 ms_handle_reset con 0x5610899ee400 session 0x5610894936c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 156 ms_handle_reset con 0x561088d2fc00 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610893bc400 session 0x56108947e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610896c9c00 session 0x561086e9a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610899ee800 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x561086235400 session 0x56108944cc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 13795328 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610899ee400 session 0x561087b14000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x561086235400 session 0x5610897ec000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9e66000/0x0/0x4ffc00000, data 0x30d08e9/0x31c2000, compress 0x0/0x0/0x0, omap 0x180b5, meta 0x2bb7f4b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610896c9c00 session 0x56108944c8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610899ee800 session 0x561087b39340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 13770752 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 ms_handle_reset con 0x5610899ee400 session 0x561089492700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.815007210s of 10.470678329s, submitted: 144
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 ms_handle_reset con 0x5610893bc400 session 0x561086cdf6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 ms_handle_reset con 0x561086235400 session 0x56108947e8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 13729792 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 ms_handle_reset con 0x5610896c9c00 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9e66000/0x0/0x4ffc00000, data 0x30d2493/0x31c4000, compress 0x0/0x0/0x0, omap 0x182c2, meta 0x2bb7d3e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 ms_handle_reset con 0x5610899ee800 session 0x56108947efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 159 ms_handle_reset con 0x5610899ee400 session 0x561086ecf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 13541376 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383989 data_alloc: 234881024 data_used: 13645293
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 13541376 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12484608 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 heartbeat osd_stat(store_statfs(0x4f9e3e000/0x0/0x4ffc00000, data 0x30f806c/0x31ea000, compress 0x0/0x0/0x0, omap 0x185d3, meta 0x2bb7a2d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12484608 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 ms_handle_reset con 0x5610896c7000 session 0x561087b39c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 ms_handle_reset con 0x561086235400 session 0x561088d01500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 4587520 heap: 111165440 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 ms_handle_reset con 0x5610899ee800 session 0x561088e208c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 507904 heap: 113262592 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 ms_handle_reset con 0x5610896c6c00 session 0x561088c91880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 161 ms_handle_reset con 0x561089a47400 session 0x561087b38380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503303 data_alloc: 251658240 data_used: 28392025
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 974848 heap: 117456896 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 161 handle_osd_map epochs [161,162], i have 162, src has [1,162]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x5610896c9c00 session 0x561088e21c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x5610899ee400 session 0x561088dffc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x561089a47000 session 0x561086cdfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x5610896c7c00 session 0x561088d00700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9e31000/0x0/0x4ffc00000, data 0x30fd551/0x31f7000, compress 0x0/0x0/0x0, omap 0x18d94, meta 0x2bb726c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x5610896c9c00 session 0x561087b141c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 8003584 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x561089a46c00 session 0x561089492540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 ms_handle_reset con 0x561089a46c00 session 0x56108944d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 8249344 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.704252243s of 10.076365471s, submitted: 169
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 ms_handle_reset con 0x5610899ee400 session 0x561087bac1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa7fe000/0x0/0x4ffc00000, data 0x27340c8/0x282c000, compress 0x0/0x0/0x0, omap 0x196bf, meta 0x2bb6941), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 8224768 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 ms_handle_reset con 0x5610896c7800 session 0x561086ecefc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 ms_handle_reset con 0x5610896c7400 session 0x561088dffdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 ms_handle_reset con 0x561089a47000 session 0x561088e20a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 15589376 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 ms_handle_reset con 0x561089a46800 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288716 data_alloc: 234881024 data_used: 10962486
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 164 ms_handle_reset con 0x5610896c7400 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 15589376 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 15572992 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x5610896c7800 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fb1ba000/0x0/0x4ffc00000, data 0x1d75768/0x1e6e000, compress 0x0/0x0/0x0, omap 0x1a587, meta 0x2bb5a79), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 15572992 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 15572992 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x5610899ee400 session 0x5610897ecc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x561089a46c00 session 0x561086ecf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x5610896c7400 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x5610896c7800 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x5610899ee400 session 0x561088dff500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 ms_handle_reset con 0x561089a46800 session 0x561089492700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 31268864 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa820000/0x0/0x4ffc00000, data 0x2713768/0x280c000, compress 0x0/0x0/0x0, omap 0x1a587, meta 0x2bb5a79), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347656 data_alloc: 234881024 data_used: 10966531
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 31268864 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x561089a46400 session 0x561086eddc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x2715203/0x280f000, compress 0x0/0x0/0x0, omap 0x1aa83, meta 0x2bb557d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 31268864 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610896c7400 session 0x561087b9c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 31268864 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610896c7800 session 0x561087b141c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610899ee400 session 0x561088d4f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 31424512 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x561089a46400 session 0x561086dfb180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.516469955s of 11.141609192s, submitted: 116
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x561089a46800 session 0x56108944c000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610896c7400 session 0x561088e216c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102817792 unmapped: 31522816 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356117 data_alloc: 234881024 data_used: 10970627
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 29278208 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x2715265/0x2810000, compress 0x0/0x0/0x0, omap 0x1af03, meta 0x2bb50fd), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x561089a46400 session 0x561087b9d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x561089a46800 session 0x561087b38e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610896c7800 session 0x561088e201c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 ms_handle_reset con 0x5610899ee400 session 0x56108947fc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x5610896c7400 session 0x561087b39c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 31686656 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x5610896c7800 session 0x56108b73aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46400 session 0x56108b73a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102244352 unmapped: 32096256 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fb1b5000/0x0/0x4ffc00000, data 0x1d78ce4/0x1e75000, compress 0x0/0x0/0x0, omap 0x1b34f, meta 0x2bb4cb1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46800 session 0x561088e20fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102252544 unmapped: 32088064 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089707000 session 0x56108b73b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x5610896c7400 session 0x56108b73ae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 32071680 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x5610896c7800 session 0x56108bcfc1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308094 data_alloc: 234881024 data_used: 10966803
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46400 session 0x561086cbba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102318080 unmapped: 32022528 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089706c00 session 0x561088e21500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46800 session 0x561086cbb340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089706800 session 0x561087b38a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102318080 unmapped: 32022528 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 32006144 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x5610896c7800 session 0x561086dfaa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089706c00 session 0x561088d016c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f89b7000/0x0/0x4ffc00000, data 0x4578c92/0x4675000, compress 0x0/0x0/0x0, omap 0x1bd08, meta 0x2bb42f8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 101900288 unmapped: 32440320 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f89b7000/0x0/0x4ffc00000, data 0x4578c92/0x4675000, compress 0x0/0x0/0x0, omap 0x1bd08, meta 0x2bb42f8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 101957632 unmapped: 32382976 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.182906151s of 10.613587379s, submitted: 120
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f61b7000/0x0/0x4ffc00000, data 0x6d78c92/0x6e75000, compress 0x0/0x0/0x0, omap 0x1bd3a, meta 0x2bb42c6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1953297 data_alloc: 234881024 data_used: 10967075
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110714880 unmapped: 23625728 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 23568384 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 31514624 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103014400 unmapped: 31326208 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103325696 unmapped: 31014912 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4ea9b7000/0x0/0x4ffc00000, data 0x12578c92/0x12675000, compress 0x0/0x0/0x0, omap 0x1bdd0, meta 0x2bb4230), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2685641 data_alloc: 234881024 data_used: 10967075
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103333888 unmapped: 31006720 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 22519808 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 30801920 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 30687232 heap: 134340608 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4e71b7000/0x0/0x4ffc00000, data 0x15d78c92/0x15e75000, compress 0x0/0x0/0x0, omap 0x1bdd0, meta 0x2bb4230), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 103915520 unmapped: 38821888 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread fragmentation_score=0.000286 took=0.000046s
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.924742699s of 10.009058952s, submitted: 36
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 heartbeat osd_stat(store_statfs(0x4e49b7000/0x0/0x4ffc00000, data 0x18578c92/0x18675000, compress 0x0/0x0/0x0, omap 0x1bdd0, meta 0x2bb4230), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267985 data_alloc: 234881024 data_used: 10967075
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 104087552 unmapped: 38649856 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46400 session 0x561087bad6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 34611200 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 ms_handle_reset con 0x561089a46800 session 0x561088e20000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 34430976 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 167 handle_osd_map epochs [167,168], i have 168, src has [1,168]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 ms_handle_reset con 0x5610896c7400 session 0x561086cbaa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 34349056 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 heartbeat osd_stat(store_statfs(0x4de9b7000/0x0/0x4ffc00000, data 0x1e578c92/0x1e675000, compress 0x0/0x0/0x0, omap 0x1bdd0, meta 0x2bb4230), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 ms_handle_reset con 0x5610896c7800 session 0x561087b40540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 ms_handle_reset con 0x561089706800 session 0x561086d49880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 34340864 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 ms_handle_reset con 0x561089706c00 session 0x561086cbac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 heartbeat osd_stat(store_statfs(0x4de9b2000/0x0/0x4ffc00000, data 0x1e57a82e/0x1e678000, compress 0x0/0x0/0x0, omap 0x1c3ca, meta 0x2bb3c36), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688205 data_alloc: 234881024 data_used: 14702643
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 34340864 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 168 handle_osd_map epochs [168,169], i have 169, src has [1,169]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x561089a46400 session 0x56108bcfddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896c7400 session 0x56108b73afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896c7800 session 0x56108b73a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x561089706c00 session 0x56108b73b500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x561089706800 session 0x56108b73b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 35356672 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896d4800 session 0x56108947fa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896c7400 session 0x561087b40c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 35233792 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fac5b000/0x0/0x4ffc00000, data 0x22d0470/0x23cf000, compress 0x0/0x0/0x0, omap 0x1cd17, meta 0x2bb32e9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 35233792 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896c7800 session 0x5610897ec000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x5610896d4800 session 0x56108b73bdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 ms_handle_reset con 0x561089706c00 session 0x56108947ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 34922496 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418949 data_alloc: 234881024 data_used: 14702627
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 34922496 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.615560532s of 11.261826515s, submitted: 166
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x561089706800 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 34922496 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x5610896c7400 session 0x561088d01c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fadc4000/0x0/0x4ffc00000, data 0x2166028/0x2266000, compress 0x0/0x0/0x0, omap 0x1cfa9, meta 0x2bb3057), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x5610896c7800 session 0x56108944c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 34914304 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 34914304 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x5610896d4800 session 0x5610897ec380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x561089706c00 session 0x561087b9d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fadc4000/0x0/0x4ffc00000, data 0x2166028/0x2266000, compress 0x0/0x0/0x0, omap 0x1cfa9, meta 0x2bb3057), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 34938880 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fadc3000/0x0/0x4ffc00000, data 0x2166038/0x2267000, compress 0x0/0x0/0x0, omap 0x1cfa9, meta 0x2bb3057), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 ms_handle_reset con 0x56108bd30800 session 0x56108bcfca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426649 data_alloc: 234881024 data_used: 14702627
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 35143680 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 170 handle_osd_map epochs [170,171], i have 171, src has [1,171]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 171 ms_handle_reset con 0x5610896c7400 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 35143680 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 34078720 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 173 ms_handle_reset con 0x5610896d4800 session 0x561086291180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 173 ms_handle_reset con 0x561089706c00 session 0x561087b40700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 173 ms_handle_reset con 0x5610896c7800 session 0x561086cdee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 33030144 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 173 heartbeat osd_stat(store_statfs(0x4f9c19000/0x0/0x4ffc00000, data 0x216986c/0x226f000, compress 0x0/0x0/0x0, omap 0x1d506, meta 0x3d52afa), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 174 ms_handle_reset con 0x56108bd30c00 session 0x561087b15340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 174 ms_handle_reset con 0x56108bd30800 session 0x561088da2380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 174 ms_handle_reset con 0x5610896c7800 session 0x561088da2000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 174 ms_handle_reset con 0x5610896c7400 session 0x5610887788c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 33013760 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1442595 data_alloc: 234881024 data_used: 14703310
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 175 ms_handle_reset con 0x5610896d4800 session 0x561087b38700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 33013760 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 175 ms_handle_reset con 0x561089706c00 session 0x561087b14c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 175 heartbeat osd_stat(store_statfs(0x4f9c14000/0x0/0x4ffc00000, data 0x216ebe8/0x2278000, compress 0x0/0x0/0x0, omap 0x1e114, meta 0x3d51eec), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.278085709s of 11.657059669s, submitted: 76
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 175 ms_handle_reset con 0x5610896c7400 session 0x56108bcfc380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 176 ms_handle_reset con 0x5610896c7800 session 0x561088c91500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450980 data_alloc: 234881024 data_used: 14703408
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 176 handle_osd_map epochs [176,177], i have 177, src has [1,177]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x56108bd30800 session 0x561088e21c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x5610896d4800 session 0x56108b73ba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x56108bd31000 session 0x561087b41a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109477888 unmapped: 33259520 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x5610896c7400 session 0x561087bac000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x5610896c7800 session 0x56108bcfd340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x21722ff/0x2280000, compress 0x0/0x0/0x0, omap 0x1e845, meta 0x3d517bb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 33243136 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x56108bd30800 session 0x561087b39880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x56108bd30000 session 0x56108947f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 ms_handle_reset con 0x561086e9ec00 session 0x561086cba000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109805568 unmapped: 32931840 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 178 ms_handle_reset con 0x56108bd31400 session 0x561088f46fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 178 ms_handle_reset con 0x5610896d4800 session 0x561087b14000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 32907264 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 178 ms_handle_reset con 0x5610896c7400 session 0x561088da2e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457697 data_alloc: 234881024 data_used: 14703310
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 32907264 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 32907264 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 179 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x2173e9d/0x2282000, compress 0x0/0x0/0x0, omap 0x1ebe7, meta 0x3d51419), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 179 ms_handle_reset con 0x5610896c7800 session 0x561088da3500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462223 data_alloc: 234881024 data_used: 14703310
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.707204819s of 13.277346611s, submitted: 70
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x56108bd30000 session 0x561088da28c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896c7400 session 0x561087b15500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 heartbeat osd_stat(store_statfs(0x4f9c01000/0x0/0x4ffc00000, data 0x21773c7/0x2289000, compress 0x0/0x0/0x0, omap 0x1f1ae, meta 0x3d50e52), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896c7800 session 0x56108bcfce00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896d4800 session 0x561088f46c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x56108bd31400 session 0x561088f47180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 32899072 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x56108bd30800 session 0x561088f47340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896c7400 session 0x561088f47500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896c7800 session 0x561089493180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x5610896d4800 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x56108bd31400 session 0x561088e21880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 ms_handle_reset con 0x56108be12800 session 0x561088c91180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 31981568 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 31981568 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495738 data_alloc: 234881024 data_used: 14703310
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 31981568 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x5610896c7400 session 0x561086cba700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f97cc000/0x0/0x4ffc00000, data 0x25ad3b7/0x26be000, compress 0x0/0x0/0x0, omap 0x1f3ac, meta 0x3d50c54), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 32464896 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x5610896c7800 session 0x56108bcfc540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 32464896 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x5610896d4800 session 0x561088779340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108bd31400 session 0x561088d01880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be12c00 session 0x56108944d880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 32317440 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f97ca000/0x0/0x4ffc00000, data 0x25aef76/0x26c2000, compress 0x0/0x0/0x0, omap 0x1fe3d, meta 0x3d501c3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 32309248 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x5610896d4800 session 0x561088f47dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108bd31400 session 0x561086d49500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503784 data_alloc: 234881024 data_used: 14846231
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 32604160 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 31940608 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.627874374s of 11.779876709s, submitted: 83
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be13c00 session 0x561087b40e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 31940608 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f97c9000/0x0/0x4ffc00000, data 0x25aefd8/0x26c3000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 31907840 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be13800 session 0x561087b401c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be13400 session 0x561087b40380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 31776768 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x5610896d4800 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528173 data_alloc: 234881024 data_used: 18830103
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 31629312 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 31629312 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 31629312 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 31629312 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f97ca000/0x0/0x4ffc00000, data 0x25aef76/0x26c2000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108bd31400 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be13800 session 0x561086dfa540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 30654464 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610260 data_alloc: 234881024 data_used: 18814743
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 27746304 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 27148288 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 27148288 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f8a5f000/0x0/0x4ffc00000, data 0x3318fd8/0x342d000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.579559326s of 10.879899025s, submitted: 111
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f8a5f000/0x0/0x4ffc00000, data 0x3318fd8/0x342d000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be13c00 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 27140096 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 27140096 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x2b84f76/0x2c98000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572298 data_alloc: 234881024 data_used: 18880279
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 27140096 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 26853376 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 26853376 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 26853376 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be12400 session 0x561087b41dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 26714112 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f91d1000/0x0/0x4ffc00000, data 0x2ba6f86/0x2cbb000, compress 0x0/0x0/0x0, omap 0x1fd11, meta 0x3d502ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1574114 data_alloc: 234881024 data_used: 18884277
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 26714112 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 26714112 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108be12400 session 0x561087b40000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 26697728 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 ms_handle_reset con 0x56108bd31400 session 0x56108b8c6fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 181 handle_osd_map epochs [181,182], i have 182, src has [1,182]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.066512108s of 10.111187935s, submitted: 23
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 182 ms_handle_reset con 0x5610896d4800 session 0x56108b73b180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 26689536 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 26673152 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578683 data_alloc: 234881024 data_used: 18884277
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 182 ms_handle_reset con 0x56108be13800 session 0x561087bac1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be13c00 session 0x561088e20a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f91c9000/0x0/0x4ffc00000, data 0x2badb62/0x2cc3000, compress 0x0/0x0/0x0, omap 0x20253, meta 0x3d4fdad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 26656768 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108bd31400 session 0x56108b8c68c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be12400 session 0x561086cba8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be13800 session 0x561088e20700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be13c00 session 0x561087bad340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be12000 session 0x56108b73ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108bd31400 session 0x561088dff500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be12400 session 0x56108b8c6a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be13800 session 0x561088dffdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x56108be13c00 session 0x56108b8c7500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x5610896c7400 session 0x561088f47a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 ms_handle_reset con 0x5610896c7800 session 0x561088f46000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 184 ms_handle_reset con 0x5610896d4800 session 0x56108944c8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 25993216 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 184 ms_handle_reset con 0x56108bd31400 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 28254208 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f921d000/0x0/0x4ffc00000, data 0x2b56287/0x2c6e000, compress 0x0/0x0/0x0, omap 0x20bc2, meta 0x3d4f43e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 184 ms_handle_reset con 0x56108be12400 session 0x56108944da40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 185 ms_handle_reset con 0x56108be13800 session 0x561088d4e000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 28983296 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 185 ms_handle_reset con 0x5610896c7800 session 0x5610897eddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 28975104 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1561782 data_alloc: 234881024 data_used: 14713152
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 186 ms_handle_reset con 0x5610896d4800 session 0x561088e21340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 28966912 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 186 ms_handle_reset con 0x56108bd31400 session 0x5610897ec700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 186 ms_handle_reset con 0x56108be13c00 session 0x561088f46700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 ms_handle_reset con 0x56108be12400 session 0x5610897ed880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 28950528 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 ms_handle_reset con 0x56108be12400 session 0x561088f46a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 ms_handle_reset con 0x56108bd31400 session 0x561088da36c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 ms_handle_reset con 0x56108be13c00 session 0x561087b14a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 28917760 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9210000/0x0/0x4ffc00000, data 0x2b5b67a/0x2c78000, compress 0x0/0x0/0x0, omap 0x22276, meta 0x3d4dd8a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22175744 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22175744 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1630561 data_alloc: 234881024 data_used: 24832652
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.983505249s of 12.354722977s, submitted: 190
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 22118400 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9211000/0x0/0x4ffc00000, data 0x2b5b5d8/0x2c77000, compress 0x0/0x0/0x0, omap 0x2235f, meta 0x3d4dca1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9210000/0x0/0x4ffc00000, data 0x2b5d073/0x2c7a000, compress 0x0/0x0/0x0, omap 0x2263e, meta 0x3d4d9c2), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1632823 data_alloc: 234881024 data_used: 24836748
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f9210000/0x0/0x4ffc00000, data 0x2b5d073/0x2c7a000, compress 0x0/0x0/0x0, omap 0x2263e, meta 0x3d4d9c2), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 120651776 unmapped: 22085632 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 18718720 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f8a55000/0x0/0x4ffc00000, data 0x3318af2/0x3437000, compress 0x0/0x0/0x0, omap 0x22a39, meta 0x3d4d5c7), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18653184 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 18161664 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685613 data_alloc: 234881024 data_used: 24996492
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 18161664 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f89dc000/0x0/0x4ffc00000, data 0x3391af2/0x34b0000, compress 0x0/0x0/0x0, omap 0x22a39, meta 0x3d4d5c7), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 18161664 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 18161664 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 18161664 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.869594574s of 14.089269638s, submitted: 80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 20144128 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f89dc000/0x0/0x4ffc00000, data 0x3391af2/0x34b0000, compress 0x0/0x0/0x0, omap 0x22a39, meta 0x3d4d5c7), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1683581 data_alloc: 234881024 data_used: 24996492
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 20144128 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 20144128 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 20013056 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 20013056 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f89ba000/0x0/0x4ffc00000, data 0x33b3af2/0x34d2000, compress 0x0/0x0/0x0, omap 0x22a39, meta 0x3d4d5c7), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 ms_handle_reset con 0x56108b9adc00 session 0x56108b73bc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122986496 unmapped: 19750912 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 189 handle_osd_map epochs [189,190], i have 190, src has [1,190]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 190 ms_handle_reset con 0x56108b9ad000 session 0x561086dfafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1688839 data_alloc: 234881024 data_used: 24996508
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 19562496 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 190 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 ms_handle_reset con 0x56108b9ad400 session 0x56108b8c6000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 ms_handle_reset con 0x56108b9adc00 session 0x561086cbafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 19513344 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 ms_handle_reset con 0x56108b9ac800 session 0x561088d00e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 ms_handle_reset con 0x56108bd31400 session 0x561087b416c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 19513344 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f89a9000/0x0/0x4ffc00000, data 0x33bc77a/0x34df000, compress 0x0/0x0/0x0, omap 0x23221, meta 0x3d4cddf), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 19505152 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108be13c00 session 0x561088f47c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108be12400 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9ac800 session 0x561086dfa000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 19447808 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9ad400 session 0x56108947e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9adc00 session 0x561086cbae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1695373 data_alloc: 234881024 data_used: 24996508
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.129067421s of 10.683837891s, submitted: 57
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9ad800 session 0x561088da3340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108bd31400 session 0x561088d4f500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 18964480 heap: 142737408 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9acc00 session 0x561088da3dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9adc00 session 0x561088f46380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 4923392 heap: 163741696 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 129515520 unmapped: 42631168 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x5610896c7800 session 0x561086cdf6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x5610896d4800 session 0x561087baca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f35a6000/0x0/0x4ffc00000, data 0x87c1043/0x88e6000, compress 0x0/0x0/0x0, omap 0x23e05, meta 0x3d4c1fb), peers [0,1] op hist [0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 47718400 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x5610896d4800 session 0x561086cbb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x5610896c7800 session 0x561088d4efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9acc00 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f23dd000/0x0/0x4ffc00000, data 0x998c000/0x9aae000, compress 0x0/0x0/0x0, omap 0x241c2, meta 0x3d4be3e), peers [0,1] op hist [0,0,0,0,2,2])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125755392 unmapped: 46391296 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108b9adc00 session 0x56108944ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 ms_handle_reset con 0x56108bd31400 session 0x5610897ed180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2381691 data_alloc: 234881024 data_used: 14717049
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 192 handle_osd_map epochs [192,193], i have 193, src has [1,193]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 50495488 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125902848 unmapped: 46243840 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 193 ms_handle_reset con 0x5610896d4800 session 0x561087b9d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125919232 unmapped: 46227456 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 194 heartbeat osd_stat(store_statfs(0x4ef3b9000/0x0/0x4ffc00000, data 0xc9abaf1/0xcad1000, compress 0x0/0x0/0x0, omap 0x244ea, meta 0x3d4bb16), peers [0,1] op hist [0,0,0,0,0,1,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 50372608 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 194 ms_handle_reset con 0x56108b9acc00 session 0x561088f468c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 194 heartbeat osd_stat(store_statfs(0x4eb71a000/0x0/0x4ffc00000, data 0x1064b68d/0x10772000, compress 0x0/0x0/0x0, omap 0x24848, meta 0x3d4b7b8), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 50094080 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.148618221s of 10.023601532s, submitted: 437
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126149 data_alloc: 234881024 data_used: 14717049
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108be12400 session 0x5610897ec1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9adc00 session 0x561088c90c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9ad400 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896d4800 session 0x561087b39500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896c7800 session 0x56108bcfd6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9ac800 session 0x56108b8c7180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9acc00 session 0x561086d556c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 48922624 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9adc00 session 0x561089492540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 heartbeat osd_stat(store_statfs(0x4e7714000/0x0/0x4ffc00000, data 0x1464da69/0x14776000, compress 0x0/0x0/0x0, omap 0x2533a, meta 0x3d4acc6), peers [0,1] op hist [0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896c7800 session 0x561088d00540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123297792 unmapped: 48848896 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108b9ac800 session 0x5610894928c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 50135040 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896d4800 session 0x561088e20e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108ab44000 session 0x56108947efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108be12400 session 0x561086cdf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896d4800 session 0x56108bcfdc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x5610896c7800 session 0x561087b9ce00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 50249728 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108ab44000 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 ms_handle_reset con 0x56108ab44400 session 0x561086ecf6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44800 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108b9ac800 session 0x56108bcfd500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121667584 unmapped: 50479104 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896d4800 session 0x561087b41c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108b9acc00 session 0x561086ecee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44000 session 0x561086d55dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1736912 data_alloc: 234881024 data_used: 14718752
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 50675712 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 50675712 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561087b40fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44800 session 0x56108944ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44c00 session 0x56108947e1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 50659328 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab45000 session 0x561088c90e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f8c14000/0x0/0x4ffc00000, data 0x314f5a5/0x3278000, compress 0x0/0x0/0x0, omap 0x25d91, meta 0x3d4a26f), peers [0,1] op hist [0,0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 50634752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab45400 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561086cde380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 50585600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f8572000/0x0/0x4ffc00000, data 0x37ef617/0x391a000, compress 0x0/0x0/0x0, omap 0x25d91, meta 0x3d4a26f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1774689 data_alloc: 234881024 data_used: 14722766
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 196 handle_osd_map epochs [196,197], i have 197, src has [1,197]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.673086166s of 10.296282768s, submitted: 203
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 50536448 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab44800 session 0x5610897ece00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 47751168 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab44c00 session 0x56108b73b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab45800 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 50896896 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 198 ms_handle_reset con 0x56108ab45c00 session 0x561088f461c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 48922624 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 50372608 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1857643 data_alloc: 234881024 data_used: 14722766
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896d4400 session 0x5610897edc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 50372608 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab45000 session 0x56108947f500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f79de000/0x0/0x4ffc00000, data 0x437b7ce/0x44aa000, compress 0x0/0x0/0x0, omap 0x26796, meta 0x3d4986a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121692160 unmapped: 50454528 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896c7800 session 0x561088da3180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 50446336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 50446336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab45800 session 0x561087b15880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab44c00 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896c7800 session 0x561089492700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 50814976 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1875133 data_alloc: 234881024 data_used: 14723351
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x5610896d4400 session 0x561087bac8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45000 session 0x561086dfae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.727210999s of 10.117328644s, submitted: 81
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45800 session 0x561088da3a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 50806784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44800 session 0x561086d55500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f77ce000/0x0/0x4ffc00000, data 0x458e7dd/0x46be000, compress 0x0/0x0/0x0, omap 0x26796, meta 0x3d4986a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45000 session 0x561086dfb180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45800 session 0x561087b38e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x5610896d4800 session 0x561087b408c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44400 session 0x561088d4fdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44000 session 0x561088f47880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 50020352 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44000 session 0x56108b8c6e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f6fd8000/0x0/0x4ffc00000, data 0x4d83379/0x4eb4000, compress 0x0/0x0/0x0, omap 0x26c41, meta 0x3d493bf), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 200 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 201 ms_handle_reset con 0x5610896d4800 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 46219264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 46219264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 46186496 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 202 ms_handle_reset con 0x56108ab44400 session 0x561087b396c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914581 data_alloc: 234881024 data_used: 22064581
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 46178304 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 202 ms_handle_reset con 0x56108ab45000 session 0x56108947fdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 126328832 unmapped: 45817856 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x435eb66/0x4491000, compress 0x0/0x0/0x0, omap 0x272c6, meta 0x3d48d3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 203 ms_handle_reset con 0x56108ab45800 session 0x561088dfe000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 203 ms_handle_reset con 0x56108ab45800 session 0x56108944c000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 43417600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 203 handle_osd_map epochs [203,204], i have 204, src has [1,204]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1937046 data_alloc: 251658240 data_used: 27083091
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.480896950s of 10.668888092s, submitted: 93
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 204 ms_handle_reset con 0x5610896d4800 session 0x561087bac1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 204 heartbeat osd_stat(store_statfs(0x4f8095000/0x0/0x4ffc00000, data 0x3cc21b7/0x3df5000, compress 0x0/0x0/0x0, omap 0x278a2, meta 0x3d4875e), peers [0,1] op hist [0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab45000 session 0x561086e9a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 38330368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab44400 session 0x561088f47500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab44000 session 0x561088f47dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 38068224 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 38068224 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 38035456 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x56108ab44400 session 0x561086cbb6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x5610896d4800 session 0x561087b9cfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x56108ab45800 session 0x561088e20c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1980818 data_alloc: 251658240 data_used: 27115859
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 207 ms_handle_reset con 0x56108bd30400 session 0x561086e9a380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 207 ms_handle_reset con 0x56108ab45000 session 0x56108b8c6540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134307840 unmapped: 37838848 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x407b9d1/0x3eb3000, compress 0x0/0x0/0x0, omap 0x28121, meta 0x3d47edf), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134307840 unmapped: 37838848 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 208 ms_handle_reset con 0x5610896d4800 session 0x561088d4efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f7c59000/0x0/0x4ffc00000, data 0x43f25dd/0x422b000, compress 0x0/0x0/0x0, omap 0x2846d, meta 0x3d47b93), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137584640 unmapped: 34562048 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 34430976 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f7c54000/0x0/0x4ffc00000, data 0x43f41cd/0x422e000, compress 0x0/0x0/0x0, omap 0x285ae, meta 0x3d47a52), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 35962880 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x5610896c7800 session 0x561088d01dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x5610896d4400 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x56108ab44400 session 0x561088da3c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2021686 data_alloc: 251658240 data_used: 28264449
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x56108ab45800 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 35921920 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.097351074s of 10.501673698s, submitted: 128
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 35921920 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 210 ms_handle_reset con 0x56108ab45800 session 0x56108b8c7dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 35913728 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 35913728 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f7c4a000/0x0/0x4ffc00000, data 0x4406911/0x4240000, compress 0x0/0x0/0x0, omap 0x28ef1, meta 0x3d4710f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896d4800 session 0x561088d00a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896c7800 session 0x561088da2c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 35897344 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896d4400 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108bd31800 session 0x561088d001c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108ab44400 session 0x561088d4f500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108bd30400 session 0x561088d4f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f8cb9000/0x0/0x4ffc00000, data 0x3096003/0x31cd000, compress 0x0/0x0/0x0, omap 0x29593, meta 0x3d46a6d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1864928 data_alloc: 234881024 data_used: 20889914
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 213 ms_handle_reset con 0x5610896c7800 session 0x561086cbb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 213 handle_osd_map epochs [213,214], i have 214, src has [1,214]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1876702 data_alloc: 234881024 data_used: 20894894
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f8cb4000/0x0/0x4ffc00000, data 0x3099748/0x31d4000, compress 0x0/0x0/0x0, omap 0x29a64, meta 0x3d4659c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4400 session 0x56108947e8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 38715392 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.027520180s of 10.344891548s, submitted: 135
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4800 session 0x5610897ec380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4800 session 0x561087bac000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896c7800 session 0x56108b73ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4400 session 0x56108b73afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x56108ab44400 session 0x561086d49a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 37519360 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x56108bd30400 session 0x56108944dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134635520 unmapped: 37511168 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x56108bd30400 session 0x56108bcfc000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x5610896c7800 session 0x561086cde8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f84d6000/0x0/0x4ffc00000, data 0x3871a6b/0x39b2000, compress 0x0/0x0/0x0, omap 0x2a4f0, meta 0x3d45b10), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 216 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 37494784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938593 data_alloc: 234881024 data_used: 20896336
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 218 ms_handle_reset con 0x5610896d4400 session 0x5610897ec000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 37453824 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 218 ms_handle_reset con 0x5610896d4800 session 0x561086d55a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 219 ms_handle_reset con 0x56108ab44400 session 0x561088dfec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 37453824 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 219 ms_handle_reset con 0x5610896c7800 session 0x56108b73a000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f84cd000/0x0/0x4ffc00000, data 0x3876c34/0x39b8000, compress 0x0/0x0/0x0, omap 0x2acf8, meta 0x3d45308), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 219 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x5610896d4400 session 0x5610897ed880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x3878824/0x39bb000, compress 0x0/0x0/0x0, omap 0x2afea, meta 0x3d45016), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 37675008 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9acc00 session 0x56108bcfd880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9ac800 session 0x56108944c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108ab45800 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x3878824/0x39bb000, compress 0x0/0x0/0x0, omap 0x2afea, meta 0x3d45016), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1872001 data_alloc: 234881024 data_used: 19780688
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2cd7824/0x2e1a000, compress 0x0/0x0/0x0, omap 0x2b0d2, meta 0x3d44f2e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x5610896c7800 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.550335884s of 10.746232986s, submitted: 100
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9ac800 session 0x56108944c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2cd7824/0x2e1a000, compress 0x0/0x0/0x0, omap 0x2b2e5, meta 0x3d44d1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 221 ms_handle_reset con 0x56108b9acc00 session 0x561088e21340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 41402368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x561089a46400 session 0x561088c90e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x5610899ee400 session 0x561086e9ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x5610896d4400 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 41394176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1866350 data_alloc: 234881024 data_used: 19780688
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 41394176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x561089a46400 session 0x561088d4fa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f93aa000/0x0/0x4ffc00000, data 0x2994a13/0x2adb000, compress 0x0/0x0/0x0, omap 0x2bc0a, meta 0x3d443f6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x5610896c7800 session 0x561087bac380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 40337408 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f93aa000/0x0/0x4ffc00000, data 0x2994a13/0x2adb000, compress 0x0/0x0/0x0, omap 0x2bc0a, meta 0x3d443f6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x56108b9ac800 session 0x561086cbbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 225 ms_handle_reset con 0x56108b9acc00 session 0x561086dfa8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 40304640 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 225 ms_handle_reset con 0x5610896c7800 session 0x56108944da40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134258688 unmapped: 37888000 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 226 ms_handle_reset con 0x5610896d4400 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 226 ms_handle_reset con 0x56108b9ac800 session 0x561089492a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925160 data_alloc: 234881024 data_used: 20200335
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 227 ms_handle_reset con 0x5610899eec00 session 0x561086e9bc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134840320 unmapped: 37306368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x5610899ef000 session 0x561086cbba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x5610896c7800 session 0x561086ecfc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x561089a46400 session 0x561088e216c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f8de9000/0x0/0x4ffc00000, data 0x2f38b2c/0x3086000, compress 0x0/0x0/0x0, omap 0x2cd98, meta 0x3d43268), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.801876068s of 10.115190506s, submitted: 160
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 37298176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 229 ms_handle_reset con 0x5610896d4400 session 0x561087bad340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 229 ms_handle_reset con 0x5610899eec00 session 0x561087b141c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1931807 data_alloc: 234881024 data_used: 20200335
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x56108b9ac800 session 0x56108947ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610896c7800 session 0x56108b73bc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610896d4400 session 0x56108b8c61c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610899eec00 session 0x561088d4f180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x561089a46400 session 0x56108b8c68c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 36659200 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610899ef400 session 0x5610897eddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f873d000/0x0/0x4ffc00000, data 0x35fbe31/0x374d000, compress 0x0/0x0/0x0, omap 0x2d560, meta 0x3d42aa0), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 231 ms_handle_reset con 0x5610896d4400 session 0x561087bac540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610896c7800 session 0x5610897ec1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899eec00 session 0x56108944d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f8733000/0x0/0x4ffc00000, data 0x35ff5a1/0x3753000, compress 0x0/0x0/0x0, omap 0x2dbe6, meta 0x3d4241a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899ef400 session 0x561086ecea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x561089a46400 session 0x561088f46380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899ef800 session 0x561087b9c000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896c7800 session 0x56108b73a540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896d4400 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1983852 data_alloc: 234881024 data_used: 20200335
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899ef400 session 0x561088d01880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896c8000 session 0x561086cdfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 35987456 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899eec00 session 0x56108b73b180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899efc00 session 0x56108944cc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x362c198/0x3782000, compress 0x0/0x0/0x0, omap 0x2df47, meta 0x3d420b9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136167424 unmapped: 35979264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x362c198/0x3782000, compress 0x0/0x0/0x0, omap 0x2df47, meta 0x3d420b9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 33669120 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 234 ms_handle_reset con 0x5610899ef800 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 234 ms_handle_reset con 0x5610899ef400 session 0x5610894921c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f8705000/0x0/0x4ffc00000, data 0x362dd34/0x3785000, compress 0x0/0x0/0x0, omap 0x2e251, meta 0x3d41daf), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 33669120 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.314812660s of 11.476747513s, submitted: 89
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 235 ms_handle_reset con 0x561089a46400 session 0x561087b9cc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 235 ms_handle_reset con 0x5610899ef400 session 0x561087bad180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 33398784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x5610899eec00 session 0x56108944d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x56108b9ac800 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x5610899ef800 session 0x561086d55340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2034221 data_alloc: 234881024 data_used: 25958031
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 33382400 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 237 ms_handle_reset con 0x561089705c00 session 0x56108b73ba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 237 ms_handle_reset con 0x5610899efc00 session 0x56108b8c7880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 33382400 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138813440 unmapped: 33333248 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 238 ms_handle_reset con 0x5610899ef400 session 0x561087b41880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f86f7000/0x0/0x4ffc00000, data 0x3635167/0x3793000, compress 0x0/0x0/0x0, omap 0x2eccd, meta 0x3d41333), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138813440 unmapped: 33333248 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 238 ms_handle_reset con 0x5610899ef800 session 0x561089493dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x56108b9ac800 session 0x561088c91a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x5610899eec00 session 0x561088d00540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x5610899ef400 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138870784 unmapped: 33275904 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 240 ms_handle_reset con 0x5610899ef800 session 0x561088e21180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f86f1000/0x0/0x4ffc00000, data 0x3637210/0x3797000, compress 0x0/0x0/0x0, omap 0x2ee1f, meta 0x3d411e1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2047515 data_alloc: 234881024 data_used: 25958616
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x5610899efc00 session 0x561087b9c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138887168 unmapped: 33259520 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x56108b9ac800 session 0x561088d4e700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x561089705800 session 0x561088d4e8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899ef800 session 0x561087bac700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899ef400 session 0x561086e9b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 142696448 unmapped: 29450240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899efc00 session 0x56108b73b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8151000/0x0/0x4ffc00000, data 0x3bccbf6/0x3d2f000, compress 0x0/0x0/0x0, omap 0x2f761, meta 0x3d4089f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 29032448 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8154000/0x0/0x4ffc00000, data 0x3bd5bf6/0x3d38000, compress 0x0/0x0/0x0, omap 0x2f761, meta 0x3d4089f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 30556160 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 243 ms_handle_reset con 0x561088793400 session 0x56108947f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 243 ms_handle_reset con 0x56108b9ac800 session 0x561088da2fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.641407013s of 10.993301392s, submitted: 175
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2103713 data_alloc: 234881024 data_used: 26803489
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x561088793400 session 0x561089493500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x5610896d4800 session 0x561088e201c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x56108bd30400 session 0x56108bcfd180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 245 ms_handle_reset con 0x5610899ef400 session 0x56108bcfcfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f8eb7000/0x0/0x4ffc00000, data 0x2e69c66/0x2fd1000, compress 0x0/0x0/0x0, omap 0x303e7, meta 0x3d3fc19), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 247 handle_osd_map epochs [247,248], i have 248, src has [1,248]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 248 ms_handle_reset con 0x5610899ef800 session 0x561086ecfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1982022 data_alloc: 234881024 data_used: 18270598
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 249 ms_handle_reset con 0x561088793400 session 0x561088e20700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 249 ms_handle_reset con 0x5610896d4800 session 0x561086cba540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8ead000/0x0/0x4ffc00000, data 0x2e6f0e0/0x2fd9000, compress 0x0/0x0/0x0, omap 0x30cf6, meta 0x3d3f30a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136904704 unmapped: 35241984 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136929280 unmapped: 35217408 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 251 ms_handle_reset con 0x5610899ef400 session 0x561088d008c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 251 ms_handle_reset con 0x5610899efc00 session 0x561087b14e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 35192832 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136962048 unmapped: 35184640 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.698543549s of 10.047485352s, submitted: 230
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 253 ms_handle_reset con 0x5610891a4c00 session 0x561086d48c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1993876 data_alloc: 234881024 data_used: 18271852
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136986624 unmapped: 35160064 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 254 ms_handle_reset con 0x561088793400 session 0x561088e20380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136994816 unmapped: 35151872 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 254 ms_handle_reset con 0x5610896d4800 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f8ea1000/0x0/0x4ffc00000, data 0x2e77da4/0x2fe7000, compress 0x0/0x0/0x0, omap 0x320d3, meta 0x3d3df2d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136994816 unmapped: 35151872 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 255 ms_handle_reset con 0x5610899ef400 session 0x561087b9ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 255 ms_handle_reset con 0x5610899efc00 session 0x561087bad180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137011200 unmapped: 35135488 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8ea0000/0x0/0x4ffc00000, data 0x2e799d4/0x2fea000, compress 0x0/0x0/0x0, omap 0x3238c, meta 0x3d3dc74), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 256 ms_handle_reset con 0x561088d2a400 session 0x561086ecf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x2e7b604/0x2fed000, compress 0x0/0x0/0x0, omap 0x326bb, meta 0x3d3d945), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 35127296 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 256 handle_osd_map epochs [256,257], i have 257, src has [1,257]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 257 ms_handle_reset con 0x561088793400 session 0x56108947e380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2006123 data_alloc: 234881024 data_used: 18271852
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137060352 unmapped: 35086336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610899ef400 session 0x561087b9dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4800 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f8e95000/0x0/0x4ffc00000, data 0x2e7f297/0x2ff5000, compress 0x0/0x0/0x0, omap 0x3325f, meta 0x3d3cda1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 35069952 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 35069952 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x56108bd30400 session 0x561086dfafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610899efc00 session 0x561088e21180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137093120 unmapped: 35053568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896c7800 session 0x561088f46a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4400 session 0x561086290540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4800 session 0x56108b73a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561086f18c00 session 0x561089493dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610899ef400 session 0x561086cba000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561088793400 session 0x56108b73b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561086f18c00 session 0x56108b8c7880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610896d4400 session 0x561087b38380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 41369600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610896d4800 session 0x561087bad500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f9ae2000/0x0/0x4ffc00000, data 0x222ae5f/0x23a2000, compress 0x0/0x0/0x0, omap 0x336dd, meta 0x3d3c923), peers [0,1] op hist [0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610899eb800 session 0x561086d55dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 260 ms_handle_reset con 0x5610896c7800 session 0x56108947e1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 260 ms_handle_reset con 0x56108bd30400 session 0x56108944ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1905745 data_alloc: 234881024 data_used: 11573391
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 41304064 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f9adc000/0x0/0x4ffc00000, data 0x2202abe/0x237b000, compress 0x0/0x0/0x0, omap 0x33bb6, meta 0x3d3c44a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.792389870s of 11.051105499s, submitted: 170
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 261 ms_handle_reset con 0x561086f18c00 session 0x56108b73b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 261 heartbeat osd_stat(store_statfs(0x4f9ad8000/0x0/0x4ffc00000, data 0x22046b7/0x237d000, compress 0x0/0x0/0x0, omap 0x33d8a, meta 0x3d3c276), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 41246720 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 262 ms_handle_reset con 0x561088793400 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 262 ms_handle_reset con 0x5610896d4400 session 0x561086cbb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 263 ms_handle_reset con 0x5610896d4800 session 0x561088dfee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 41205760 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x2205e2b/0x237d000, compress 0x0/0x0/0x0, omap 0x343e5, meta 0x3d3bc1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 263 handle_osd_map epochs [263,264], i have 264, src has [1,264]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912743 data_alloc: 234881024 data_used: 11581415
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561086f18c00 session 0x561086cdfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561088793400 session 0x56108bcfd180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x2209566/0x2383000, compress 0x0/0x0/0x0, omap 0x33ac1, meta 0x3d3c53f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896c7800 session 0x56108b8c7880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x56108bd30400 session 0x561088d4e700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561088793400 session 0x561086dfb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896c7800 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896d4800 session 0x56108bcfc700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x56108bd30400 session 0x561086cba540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x5610899eb000 session 0x561088e20a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x5610896c7800 session 0x5610894921c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610899ebc00 session 0x56108947ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561087b9c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896d4800 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561086f18c00 session 0x561088f47340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925905 data_alloc: 234881024 data_used: 14727159
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135307264 unmapped: 36839424 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.875901222s of 10.079591751s, submitted: 157
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896c7800 session 0x56108944dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896d4800 session 0x561089493dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610899ebc00 session 0x561088c91180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x56108bd30400 session 0x561088f46700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135249920 unmapped: 36896768 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9afa000/0x0/0x4ffc00000, data 0x220eb6d/0x238e000, compress 0x0/0x0/0x0, omap 0x32c9b, meta 0x3d3d365), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561087b38380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896c7800 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135258112 unmapped: 36888576 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9afc000/0x0/0x4ffc00000, data 0x220eb8d/0x2390000, compress 0x0/0x0/0x0, omap 0x35fd7, meta 0x3d3a029), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x5610896d4800 session 0x561086cba380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135266304 unmapped: 36880384 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x5610899ebc00 session 0x56108947e380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x56108bd31000 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134701056 unmapped: 37445632 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 269 ms_handle_reset con 0x561088793400 session 0x561087b9c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1989818 data_alloc: 234881024 data_used: 14727772
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x2a52dc2/0x2bd7000, compress 0x0/0x0/0x0, omap 0x3780b, meta 0x3d387f5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896c7800 session 0x561088dfee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896d4800 session 0x561087bad6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610899ebc00 session 0x561087baca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x56108bd31400 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x561088793400 session 0x561087b39180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896c7800 session 0x561086cbafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896d4800 session 0x561088e20700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 271 ms_handle_reset con 0x5610899ebc00 session 0x56108944ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x2215a20/0x239a000, compress 0x0/0x0/0x0, omap 0x379b7, meta 0x3d38649), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x56108bd31800 session 0x561088d001c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946217 data_alloc: 234881024 data_used: 14728727
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x561088793400 session 0x561086d55340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x5610896c7800 session 0x561086cbbc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 37396480 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x5610896d4800 session 0x56108b73bdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 37396480 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x56108bd30800 session 0x56108947f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.460597038s of 10.737349510s, submitted: 154
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x5610899ebc00 session 0x561088f47180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x561088d2fc00 session 0x561088d4e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x561088793400 session 0x561086e9b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108bd30800 session 0x56108947e1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9ae9000/0x0/0x4ffc00000, data 0x2219288/0x239f000, compress 0x0/0x0/0x0, omap 0x37e65, meta 0x3d3819b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946443 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9aee000/0x0/0x4ffc00000, data 0x22191e6/0x239e000, compress 0x0/0x0/0x0, omap 0x37e65, meta 0x3d3819b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108be12000 session 0x56108b73b180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108be13800 session 0x561088e21180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135553024 unmapped: 36593664 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9aed000/0x0/0x4ffc00000, data 0x22191f6/0x239f000, compress 0x0/0x0/0x0, omap 0x38004, meta 0x3d37ffc), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 273 handle_osd_map epochs [273,274], i have 274, src has [1,274]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1954831 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9ae7000/0x0/0x4ffc00000, data 0x221ac85/0x23a3000, compress 0x0/0x0/0x0, omap 0x38153, meta 0x3d37ead), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9ae7000/0x0/0x4ffc00000, data 0x221ac85/0x23a3000, compress 0x0/0x0/0x0, omap 0x38153, meta 0x3d37ead), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1954831 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088793400 session 0x561088da2fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088d2fc00 session 0x561089493880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.987413406s of 15.062505722s, submitted: 45
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108bd30800 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108be12000 session 0x561086ecf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108be13c00 session 0x561086d54c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088793400 session 0x56108947ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 35504128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 274 handle_osd_map epochs [274,275], i have 275, src has [1,275]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f9aea000/0x0/0x4ffc00000, data 0x221ac75/0x23a2000, compress 0x0/0x0/0x0, omap 0x386b4, meta 0x3d3794c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 275 ms_handle_reset con 0x56108be12000 session 0x561088f46700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 35504128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108bd30800 session 0x561087b39340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be13c00 session 0x5610894921c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088d2fc00 session 0x561087bad180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962941 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 35487744 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088d2fc00 session 0x561086cba540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088793400 session 0x561086cbae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 35487744 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136724480 unmapped: 56426496 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136781824 unmapped: 56369152 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 56344576 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be13c00 session 0x561086ecefc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f2ee3000/0x0/0x4ffc00000, data 0x8e1e8f1/0x8fa9000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [0,0,0,0,1,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739045 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f16e1000/0x0/0x4ffc00000, data 0xa61e963/0xa7ab000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 55222272 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4eeee1000/0x0/0x4ffc00000, data 0xce1e963/0xcfab000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 55197696 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.002960205s of 10.177382469s, submitted: 82
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 142229504 unmapped: 50921472 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108bd30800 session 0x561089493500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be12400 session 0x561086ecee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be12000 session 0x561088e20380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138092544 unmapped: 55058432 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be13c00 session 0x561086dfb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 55123968 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be12800 session 0x56108bcfd180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088793400 session 0x5610897ec700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088d2fc00 session 0x561088e20380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561089493500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325467 data_alloc: 234881024 data_used: 14729883
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137289728 unmapped: 55861248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be12000 session 0x561086ecefc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be13c00 session 0x561087bad180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 55828480 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 heartbeat osd_stat(store_statfs(0x4e9e8f000/0x0/0x4ffc00000, data 0x11e6e0e6/0x11ffd000, compress 0x0/0x0/0x0, omap 0x39447, meta 0x3d36bb9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 56074240 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561088f46c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108695c400 session 0x56108b8c7180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137084928 unmapped: 56066048 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x561089705000 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108ca42000 session 0x56108947ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108ca42400 session 0x5610894921c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108695c400 session 0x561086ecee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x561089705000 session 0x561086dfb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108be12c00 session 0x56108b8c76c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108bd30800 session 0x561088e20a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108ca42000 session 0x561086dfa000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108be12000 session 0x561086dfa540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108695c400 session 0x561089493dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561089705000 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088793400 session 0x561088da2fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 55443456 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088d2fc00 session 0x561087b9c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108695c400 session 0x561087b9ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398680 data_alloc: 234881024 data_used: 14729981
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 55410688 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 heartbeat osd_stat(store_statfs(0x4e9462000/0x0/0x4ffc00000, data 0x1289192a/0x12a26000, compress 0x0/0x0/0x0, omap 0x3997d, meta 0x3d36683), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088793400 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 55410688 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.954721451s of 10.590643883s, submitted: 109
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 280 ms_handle_reset con 0x561089705000 session 0x561087b38700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 55386112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 280 ms_handle_reset con 0x56108be12c00 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e9460000/0x0/0x4ffc00000, data 0x12893921/0x12a2a000, compress 0x0/0x0/0x0, omap 0x39b3a, meta 0x3d364c6), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108be12000 session 0x5610897ece00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108bd30800 session 0x561089492700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108be12000 session 0x561087b9c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 55353344 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108695c400 session 0x561088d4efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x561088793400 session 0x561088c90a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 55197696 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 282 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c6380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 282 ms_handle_reset con 0x56108695c400 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504440 data_alloc: 234881024 data_used: 14734627
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138059776 unmapped: 55091200 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 283 ms_handle_reset con 0x56108be12000 session 0x561087bac700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 139444224 unmapped: 53706752 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 146718720 unmapped: 46432256 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 284 ms_handle_reset con 0x56108ca42c00 session 0x561086cbbc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 284 heartbeat osd_stat(store_statfs(0x4dec2e000/0x0/0x4ffc00000, data 0x1d0be4ea/0x1d25c000, compress 0x0/0x0/0x0, omap 0x3b587, meta 0x3d34a79), peers [0,1] op hist [0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 49078272 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 44048384 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x56108bd30800 session 0x561086e9ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x561089705000 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x56108ca43000 session 0x561087b9dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985933 data_alloc: 234881024 data_used: 24481363
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145104896 unmapped: 48046080 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108695c400 session 0x561087b9d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 48037888 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.811054707s of 10.003942490s, submitted: 466
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108be12000 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108ca42c00 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145145856 unmapped: 48005120 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 287 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 47988736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 287 heartbeat osd_stat(store_statfs(0x4d7c29000/0x0/0x4ffc00000, data 0x240c37fb/0x24261000, compress 0x0/0x0/0x0, omap 0x3bc8d, meta 0x3d34373), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 47988736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991803 data_alloc: 234881024 data_used: 24486587
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 heartbeat osd_stat(store_statfs(0x4d7c24000/0x0/0x4ffc00000, data 0x240c52b2/0x24264000, compress 0x0/0x0/0x0, omap 0x3c015, meta 0x3d33feb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 47955968 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108695c400 session 0x56108b73a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x561089705000 session 0x561086dfa000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 43466752 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 43343872 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108be12000 session 0x56108bcfca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108ca43000 session 0x561086d48380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 heartbeat osd_stat(store_statfs(0x4d6626000/0x0/0x4ffc00000, data 0x245172c2/0x246b7000, compress 0x0/0x0/0x0, omap 0x3c015, meta 0x4ed3feb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 43991040 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 148545536 unmapped: 44605440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108ca43000 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108695c400 session 0x56108b73ba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5040814 data_alloc: 234881024 data_used: 25174715
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x561088d2fc00 session 0x56108947f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x561089705000 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108be12000 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149209088 unmapped: 43941888 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108be12000 session 0x561087b416c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 43925504 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x56108695c400 session 0x56108947ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x561088d2fc00 session 0x561089492fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x561089705000 session 0x561089492000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 heartbeat osd_stat(store_statfs(0x4d5f43000/0x0/0x4ffc00000, data 0x24c02b2f/0x24da7000, compress 0x0/0x0/0x0, omap 0x3cca7, meta 0x4ed3359), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149258240 unmapped: 43892736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.983293533s of 10.446523666s, submitted: 220
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x56108ca43400 session 0x561086290540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149258240 unmapped: 43892736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 291 ms_handle_reset con 0x561088d2fc00 session 0x561087b41180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 42991616 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 292 ms_handle_reset con 0x56108695c400 session 0x561086cdefc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 292 ms_handle_reset con 0x56108ca43000 session 0x56108b73b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5128883 data_alloc: 234881024 data_used: 25175372
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 43589632 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 43581440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 43581440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 292 heartbeat osd_stat(store_statfs(0x4d5a74000/0x0/0x4ffc00000, data 0x250ce302/0x25276000, compress 0x0/0x0/0x0, omap 0x3d6b7, meta 0x4ed2949), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 43474944 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 43458560 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5148337 data_alloc: 234881024 data_used: 25183564
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 293 ms_handle_reset con 0x561089705000 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 293 heartbeat osd_stat(store_statfs(0x4d586d000/0x0/0x4ffc00000, data 0x252d1f81/0x2547c000, compress 0x0/0x0/0x0, omap 0x3d7fa, meta 0x4ed2806), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 293 heartbeat osd_stat(store_statfs(0x4d586d000/0x0/0x4ffc00000, data 0x252d1f81/0x2547c000, compress 0x0/0x0/0x0, omap 0x3d7fa, meta 0x4ed2806), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928338051s of 10.054156303s, submitted: 67
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 43393024 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 294 ms_handle_reset con 0x56108be12000 session 0x561086dfafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 43393024 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561088d2fc00 session 0x56108947fdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x56108bcfda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43000 session 0x561086d55c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108695c400 session 0x561086d55dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca42000 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43c00 session 0x56108944ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43800 session 0x561088d4fc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 heartbeat osd_stat(store_statfs(0x4d5868000/0x0/0x4ffc00000, data 0x252d56b9/0x25482000, compress 0x0/0x0/0x0, omap 0x3dcd6, meta 0x4ed232a), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5196183 data_alloc: 251658240 data_used: 29943116
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108695c400 session 0x561086cbb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561088d2fc00 session 0x561087badc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43000 session 0x561088da3880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x561086d55880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43800 session 0x561087b9cc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 35979264 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 296 ms_handle_reset con 0x56108695c400 session 0x561086cde380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157204480 unmapped: 35946496 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157220864 unmapped: 35930112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 60K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4882 syncs, 3.07 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9331 writes, 36K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 23.64 MB, 0.04 MB/s#012Interval WAL: 9331 writes, 3965 syncs, 2.35 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157220864 unmapped: 35930112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108ca43c00 session 0x56108b73a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108695c400 session 0x561086dfac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 38641664 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 heartbeat osd_stat(store_statfs(0x4d4f0d000/0x0/0x4ffc00000, data 0x25c2fe61/0x25ddf000, compress 0x0/0x0/0x0, omap 0x3e41d, meta 0x4ed1be3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5241320 data_alloc: 251658240 data_used: 29989955
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 38641664 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x561086f41800 session 0x561086d55dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896d5800 session 0x5610897ecfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6c00 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.773379326s of 11.058368683s, submitted: 45
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6400 session 0x561088dfe000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c7000 session 0x561087bac700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108695c400 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x561086f41800 session 0x56108b8c7340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6c00 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 heartbeat osd_stat(store_statfs(0x4d4f0c000/0x0/0x4ffc00000, data 0x25c2fe71/0x25de0000, compress 0x0/0x0/0x0, omap 0x3e708, meta 0x4ed18f8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 38461440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247586 data_alloc: 251658240 data_used: 30114371
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 38461440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 38453248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 38453248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4f07000/0x0/0x4ffc00000, data 0x25c318f0/0x25de3000, compress 0x0/0x0/0x0, omap 0x3eab2, meta 0x4ed154e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896d5800 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154394624 unmapped: 38756352 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 38748160 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5250786 data_alloc: 251658240 data_used: 30658115
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 156655616 unmapped: 36495360 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161832960 unmapped: 31318016 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4b2e000/0x0/0x4ffc00000, data 0x266f38f0/0x261be000, compress 0x0/0x0/0x0, omap 0x3f277, meta 0x4ed0d89), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561086dfa000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc400 session 0x561086d48380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x561086cdf880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c8800 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 29237248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610888ab800 session 0x56108944c8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc400 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c8800 session 0x56108bcfca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x5610897edc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4239000/0x0/0x4ffc00000, data 0x26fe38f0/0x26ab3000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4239000/0x0/0x4ffc00000, data 0x26fe38f0/0x26ab3000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5425969 data_alloc: 251658240 data_used: 33798723
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.578448296s of 13.991296768s, submitted: 141
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x561088d2ec00 session 0x561088d4f500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165052416 unmapped: 28098560 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4216000/0x0/0x4ffc00000, data 0x27005913/0x26ad6000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x561089705000 session 0x561089492fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x56108ca43000 session 0x561087b41340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x56108ca43800 session 0x561089492540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169082880 unmapped: 24068096 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5461090 data_alloc: 251658240 data_used: 39815747
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4259000/0x0/0x4ffc00000, data 0x26fa4913/0x26a75000, compress 0x0/0x0/0x0, omap 0x3f5f6, meta 0x4ed0a0a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171024384 unmapped: 22126592 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 21929984 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4259000/0x0/0x4ffc00000, data 0x26fa4913/0x26a75000, compress 0x0/0x0/0x0, omap 0x3f5f6, meta 0x4ed0a0a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x561088c91a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 21823488 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 299 ms_handle_reset con 0x5610893bc800 session 0x561086e9a8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172507136 unmapped: 20643840 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 299 ms_handle_reset con 0x5610896c9400 session 0x561088f47a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 300 ms_handle_reset con 0x5610896c8800 session 0x561086d548c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 20619264 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5519098 data_alloc: 251658240 data_used: 39840339
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 18997248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 301 ms_handle_reset con 0x561089705000 session 0x5610897ecc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 18956288 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d3ba8000/0x0/0x4ffc00000, data 0x27735c3b/0x27142000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 18956288 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.297831535s of 10.926359177s, submitted: 71
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173834240 unmapped: 19316736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173883392 unmapped: 19267584 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5608546 data_alloc: 251658240 data_used: 40423580
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d304e000/0x0/0x4ffc00000, data 0x28288c3b/0x27c95000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 18432000 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d2fab000/0x0/0x4ffc00000, data 0x2832cbd9/0x27d38000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 302 ms_handle_reset con 0x56108ca43000 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 302 ms_handle_reset con 0x5610893bc800 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 302 heartbeat osd_stat(store_statfs(0x4d3ace000/0x0/0x4ffc00000, data 0x277487b9/0x2721c000, compress 0x0/0x0/0x0, omap 0x405ed, meta 0x4ecfa13), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 302 heartbeat osd_stat(store_statfs(0x4d3acd000/0x0/0x4ffc00000, data 0x277487c9/0x2721d000, compress 0x0/0x0/0x0, omap 0x405ed, meta 0x4ecfa13), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 303 ms_handle_reset con 0x5610896c8800 session 0x56108b73a8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173490176 unmapped: 19660800 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 ms_handle_reset con 0x561089705000 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 ms_handle_reset con 0x5610896c9400 session 0x56108bcfdc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5544312 data_alloc: 251658240 data_used: 40427676
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 18554880 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 heartbeat osd_stat(store_statfs(0x4d28ff000/0x0/0x4ffc00000, data 0x27770e2c/0x27249000, compress 0x0/0x0/0x0, omap 0x40f22, meta 0x606f0de), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 51765248 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 192200704 unmapped: 34545664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 heartbeat osd_stat(store_statfs(0x4cf8fe000/0x0/0x4ffc00000, data 0x2a770e3b/0x2a24a000, compress 0x0/0x0/0x0, omap 0x40faa, meta 0x606f056), peers [0,1] op hist [0,0,0,0,0,1,3])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175570944 unmapped: 51175424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.440034866s of 10.162814140s, submitted: 189
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184696832 unmapped: 42049536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6979924 data_alloc: 251658240 data_used: 40427948
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 46710784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x561086fbe400 session 0x56108b73b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x56108ca43800 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610893bc800 session 0x56108947f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610896c8800 session 0x561088dfec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 46489600 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 heartbeat osd_stat(store_statfs(0x4c06ca000/0x0/0x4ffc00000, data 0x399a48aa/0x3947e000, compress 0x0/0x0/0x0, omap 0x4138a, meta 0x606ec76), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180404224 unmapped: 46342144 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610896c9400 session 0x561086cbaa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 heartbeat osd_stat(store_statfs(0x4cdaca000/0x0/0x4ffc00000, data 0x2b1a989b/0x2ac82000, compress 0x0/0x0/0x0, omap 0x41412, meta 0x606ebee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178274304 unmapped: 48472064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x561089705000 session 0x561087b9d880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178274304 unmapped: 48472064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610893bc800 session 0x561086e9afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x561088d2ec00 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610893bc400 session 0x561086dfafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5647563 data_alloc: 251658240 data_used: 40427948
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610896c8800 session 0x56108bcfddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 52772864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x56108ca43800 session 0x56108b8c6380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x561088d2ec00 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 51732480 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 49528832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 307 heartbeat osd_stat(store_statfs(0x4d39f4000/0x0/0x4ffc00000, data 0x26680458/0x26158000, compress 0x0/0x0/0x0, omap 0x418bd, meta 0x606e743), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 307 ms_handle_reset con 0x5610896c9400 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 49332224 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.130283356s of 10.143652916s, submitted: 340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610893bc400 session 0x561088e208c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610893bc800 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610896d5800 session 0x561086cbbc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x561087057c00 session 0x561087b14a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5374520 data_alloc: 251658240 data_used: 37076016
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610896d5800 session 0x561087b15880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x5610896c8800 session 0x561088d4f180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561088d2ec00 session 0x56108cc2a000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 50946048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x5610893bc400 session 0x561088d4fdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561087057c00 session 0x56108b73a8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 50864128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561088d2ec00 session 0x561086e9a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 heartbeat osd_stat(store_statfs(0x4e4161000/0x0/0x4ffc00000, data 0x1383026e/0x139e8000, compress 0x0/0x0/0x0, omap 0x42d55, meta 0x606d2ab), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 310 handle_osd_map epochs [311,311], i have 311, src has [1,311]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 311 ms_handle_reset con 0x5610896c8800 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174563328 unmapped: 52183040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2661024 data_alloc: 251658240 data_used: 36244332
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 313 ms_handle_reset con 0x5610896d5800 session 0x561087b9c8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 313 ms_handle_reset con 0x5610893bc800 session 0x561088f46a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x56108be12c00 session 0x561086cbae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x56108ca42800 session 0x561088da21c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583928108s of 10.055441856s, submitted: 298
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x5610893bc800 session 0x56108947f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f664a000/0x0/0x4ffc00000, data 0x3346151/0x3502000, compress 0x0/0x0/0x0, omap 0x43811, meta 0x606c7ef), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2417537 data_alloc: 234881024 data_used: 16199279
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f752b000/0x0/0x4ffc00000, data 0x2462ba6/0x261e000, compress 0x0/0x0/0x0, omap 0x43936, meta 0x606c6ca), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f752b000/0x0/0x4ffc00000, data 0x2462ba6/0x261e000, compress 0x0/0x0/0x0, omap 0x43936, meta 0x606c6ca), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 315 ms_handle_reset con 0x561087057c00 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 67551232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 67534848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401471 data_alloc: 234881024 data_used: 14101517
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523480490' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 315 handle_osd_map epochs [317,317], i have 315, src has [1,317]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 315 handle_osd_map epochs [316,317], i have 315, src has [1,317]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561088d2ec00 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f7727000/0x0/0x4ffc00000, data 0x226503c/0x2423000, compress 0x0/0x0/0x0, omap 0x43dba, meta 0x606c246), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561087057c00 session 0x561087b15880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x56108ca42800 session 0x561087b14a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x56108be12c00 session 0x56108947f340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x5610896c8800 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 67624960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.965824127s of 10.050975800s, submitted: 67
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x5610896d5800 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561087057c00 session 0x56108bcfddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 67608576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409832 data_alloc: 234881024 data_used: 14102137
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 318 ms_handle_reset con 0x5610896c8800 session 0x561087b15500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x5610896d5800 session 0x56108b73a540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f771e000/0x0/0x4ffc00000, data 0x2268be4/0x242a000, compress 0x0/0x0/0x0, omap 0x443f1, meta 0x606bc0f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f771e000/0x0/0x4ffc00000, data 0x2268be4/0x242a000, compress 0x0/0x0/0x0, omap 0x443f1, meta 0x606bc0f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x56108be12c00 session 0x561089492c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x5610896c9400 session 0x56108b8c6c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x56108ca42800 session 0x561088f46c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x561087057c00 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158375936 unmapped: 68370432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2421042 data_alloc: 234881024 data_used: 14102102
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 68321280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x561088da2380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 68321280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c9400 session 0x561087b39c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896d5800 session 0x561086d54700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x56108bcfca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c9400 session 0x56108947efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x56108ca42800 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 68255744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x56108be12c00 session 0x561087b9d880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x561087057c00 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.975288391s of 10.086947441s, submitted: 55
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f771b000/0x0/0x4ffc00000, data 0x226a850/0x2430000, compress 0x0/0x0/0x0, omap 0x4480f, meta 0x606b7f1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c9400 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158539776 unmapped: 68206592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108ca42800 session 0x56108b73a540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436001 data_alloc: 234881024 data_used: 14102722
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 68198400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561088da21c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561087b39c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x561087057c00 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 68198400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c8800 session 0x561086d48380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c9400 session 0x561087b396c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108ca42800 session 0x5610897edc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158515200 unmapped: 68231168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x226c7e5/0x2434000, compress 0x0/0x0/0x0, omap 0x44b1f, meta 0x606b4e1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c8800 session 0x5610897ed880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561087057c00 session 0x561086e9b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c9400 session 0x561086cba8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561088d3f800 session 0x561086d54700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2438717 data_alloc: 234881024 data_used: 14103370
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x56108695d800 session 0x561088e208c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561087057c00 session 0x561086e9afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f7713000/0x0/0x4ffc00000, data 0x226e3b3/0x2437000, compress 0x0/0x0/0x0, omap 0x44c47, meta 0x606b3b9), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c8800 session 0x561088dffdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561088d29400 session 0x561088d00540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 68378624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c9400 session 0x56108b73b6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.958703995s of 10.064065933s, submitted: 89
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108695d800 session 0x56108cc2a000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d2fc00 session 0x561086dfb500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 68378624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108bd30c00 session 0x561087b15340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561087057c00 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d29400 session 0x56108bcfd500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2450039 data_alloc: 234881024 data_used: 14103955
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108695d800 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561087057c00 session 0x561087b15500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d2fc00 session 0x561089492c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 68354048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30c00 session 0x561086dfac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x5610896c8800 session 0x561088da2380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108695d800 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 68354048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x561087057c00 session 0x561086cbaa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x561088d2fc00 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f76cb000/0x0/0x4ffc00000, data 0x22b1bdc/0x247f000, compress 0x0/0x0/0x0, omap 0x4545f, meta 0x606aba1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30c00 session 0x56108944dc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158416896 unmapped: 68329472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30000 session 0x561087bace00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108695d800 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 68345856 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158433280 unmapped: 68313088 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561087057c00 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088d2fc00 session 0x561086e9afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2455665 data_alloc: 234881024 data_used: 14104533
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x56108bd30c00 session 0x56108944c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088793400 session 0x561088d4ee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088792c00 session 0x561088d00000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f76cb000/0x0/0x4ffc00000, data 0x22b35c3/0x247f000, compress 0x0/0x0/0x0, omap 0x45ac9, meta 0x606a537), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x56108695d800 session 0x561089492fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.161159515s of 10.308839798s, submitted: 95
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158449664 unmapped: 68296704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 326 ms_handle_reset con 0x561088d2fc00 session 0x561089493180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2460706 data_alloc: 234881024 data_used: 14105141
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f76c9000/0x0/0x4ffc00000, data 0x22b5032/0x2481000, compress 0x0/0x0/0x0, omap 0x45c3f, meta 0x606a3c1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158449664 unmapped: 68296704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 327 ms_handle_reset con 0x56108bd30c00 session 0x561088da2c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 68435968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x561086f18c00 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x56108695d800 session 0x561086d488c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 68435968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158326784 unmapped: 68419584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x561088792c00 session 0x561088d4e380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561088d2fc00 session 0x561086d48fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x5610899ea400 session 0x56108944d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561087057c00 session 0x561086dfac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561088793400 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158384128 unmapped: 68362240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 heartbeat osd_stat(store_statfs(0x4f76be000/0x0/0x4ffc00000, data 0x22ba378/0x248c000, compress 0x0/0x0/0x0, omap 0x46ec3, meta 0x606913d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x56108695d800 session 0x561088d00c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2477005 data_alloc: 234881024 data_used: 14361141
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x56108bd30c00 session 0x561086e9b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 68345856 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561088792c00 session 0x56108bcfda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561087057c00 session 0x561088da3880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158048256 unmapped: 68698112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561087057c00 session 0x561086cbae00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f76bb000/0x0/0x4ffc00000, data 0x22bbf24/0x2490000, compress 0x0/0x0/0x0, omap 0x4709d, meta 0x6068f63), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088792c00 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108695d800 session 0x561086ecf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088793400 session 0x561088da3dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 67575808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108bd30c00 session 0x561086d55880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561087057c00 session 0x561086d55500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108695d800 session 0x561086d49500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 67575808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088792c00 session 0x56108cc2a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.880310059s of 10.131997108s, submitted: 116
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 332 ms_handle_reset con 0x5610899ea400 session 0x561087b9ce00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 332 heartbeat osd_stat(store_statfs(0x4f76f5000/0x0/0x4ffc00000, data 0x227f863/0x2455000, compress 0x0/0x0/0x0, omap 0x475d9, meta 0x6068a27), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2487270 data_alloc: 234881024 data_used: 14105216
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088d2fc00 session 0x561088f46a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x56108695d800 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088793400 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x5610899eb800 session 0x5610897ecfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 67543040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088792c00 session 0x561086d49500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x56108695d800 session 0x561088c91180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561087057c00 session 0x561088d4fa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 68599808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x5610899eb800 session 0x56108947fa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088d2fc00 session 0x561086d55180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088793400 session 0x56108b73a540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x56108695d800 session 0x561087b41340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088792c00 session 0x56108944d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561087057c00 session 0x561087bac380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 68534272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088793400 session 0x56108b73a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 68509696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088d2fc00 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x56108695d800 session 0x561088dfe8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f76eb000/0x0/0x4ffc00000, data 0x2284d5e/0x245d000, compress 0x0/0x0/0x0, omap 0x47e85, meta 0x606817b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 68509696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x561087057c00 session 0x561086dfafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x561088793400 session 0x561087b40000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2509894 data_alloc: 234881024 data_used: 14762005
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899ea400 session 0x561086ecf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899eb800 session 0x561087b41180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 67919872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561088792c00 session 0x561088da2000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 67903488 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561087057c00 session 0x56108944ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561088793400 session 0x56108cc2b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899ea400 session 0x561086290fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 337 ms_handle_reset con 0x5610896d5c00 session 0x561088d00000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 67633152 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x5610899eb000 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x561087057c00 session 0x561086291180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x56108695d800 session 0x561088da3a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x561088792c00 session 0x561087b38380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f7678000/0x0/0x4ffc00000, data 0x22f62bf/0x24d2000, compress 0x0/0x0/0x0, omap 0x487e3, meta 0x606781d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 339 ms_handle_reset con 0x561088793400 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2521801 data_alloc: 234881024 data_used: 14762087
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 339 ms_handle_reset con 0x56108695d800 session 0x56108b73aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.879584312s of 12.438203812s, submitted: 176
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561087057c00 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561088793400 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561088792c00 session 0x561086dfb180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x5610899eb000 session 0x561087b14a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f7677000/0x0/0x4ffc00000, data 0x22f825b/0x24d5000, compress 0x0/0x0/0x0, omap 0x48913, meta 0x60676ed), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x56108695d800 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f766f000/0x0/0x4ffc00000, data 0x22fb94a/0x24db000, compress 0x0/0x0/0x0, omap 0x49d60, meta 0x60662a0), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561087057c00 session 0x56108b73aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2532107 data_alloc: 234881024 data_used: 14762087
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561088793400 session 0x561088d4fc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561088792c00 session 0x561088d4e380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x5610899ea400 session 0x561086d49880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561087057c00 session 0x561087b416c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561088793400 session 0x561089492000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 68468736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561088792c00 session 0x56108cc2bdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x56108695d800 session 0x56108947efc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x5610896d4c00 session 0x56108944d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 68468736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x561087057c00 session 0x561088da3880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 344 handle_osd_map epochs [344,345], i have 345, src has [1,345]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 345 ms_handle_reset con 0x561088792c00 session 0x56108947ee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 69001216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2546297 data_alloc: 234881024 data_used: 14911079
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x561088793400 session 0x561086d55a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f765e000/0x0/0x4ffc00000, data 0x2304353/0x24ec000, compress 0x0/0x0/0x0, omap 0x4aba5, meta 0x606545b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610896d4800 session 0x561087b38e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610896d4400 session 0x561087b41dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610899ee800 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.287918091s of 10.447829247s, submitted: 97
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x561087057c00 session 0x561086ecee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x56108695d800 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x5610896d4c00 session 0x561087bada40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561088792c00 session 0x561086cdf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561088793400 session 0x561088dffdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561087057c00 session 0x561088d00e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x5610899ee800 session 0x56108bcfd500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x56108695d800 session 0x561088e208c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f765a000/0x0/0x4ffc00000, data 0x2307a7f/0x24f0000, compress 0x0/0x0/0x0, omap 0x4b103, meta 0x6064efd), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x5610896d4800 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x56108695d800 session 0x56108b8c6c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x5610896d4c00 session 0x56108b73a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 68763648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x561087057c00 session 0x561086dfac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2553638 data_alloc: 234881024 data_used: 14912207
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x561088793400 session 0x56108944da40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f7656000/0x0/0x4ffc00000, data 0x2309627/0x24f2000, compress 0x0/0x0/0x0, omap 0x4b74c, meta 0x60648b4), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 68755456 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x5610896d4800 session 0x56108944ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 68739072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 350 ms_handle_reset con 0x56108695d800 session 0x56108b73a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158015488 unmapped: 68730880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 351 ms_handle_reset con 0x5610896d4800 session 0x561088d016c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 352 ms_handle_reset con 0x561088793400 session 0x561088f47a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158031872 unmapped: 68714496 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 352 ms_handle_reset con 0x561087057c00 session 0x561086e9b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 352 handle_osd_map epochs [352,353], i have 353, src has [1,353]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 353 ms_handle_reset con 0x5610899ee800 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 353 ms_handle_reset con 0x5610896d4c00 session 0x561088f46000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158040064 unmapped: 68706304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2562466 data_alloc: 234881024 data_used: 14764413
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158064640 unmapped: 68681728 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561087057c00 session 0x561088dffdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x56108695d800 session 0x561088d4fc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x5610896d4800 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f76b7000/0x0/0x4ffc00000, data 0x22a46c1/0x2493000, compress 0x0/0x0/0x0, omap 0x4c222, meta 0x6063dde), peers [0,1] op hist [0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561088793400 session 0x561088da3dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561087057c00 session 0x5610897ecfc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610896d4800 session 0x561087b9c8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157212672 unmapped: 69533696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x56108695d800 session 0x561087bacc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610896d4c00 session 0x561088d00000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.121862411s of 10.008955956s, submitted: 191
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610899efc00 session 0x561089492fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f76b4000/0x0/0x4ffc00000, data 0x22a7a34/0x2498000, compress 0x0/0x0/0x0, omap 0x4c6ac, meta 0x6063954), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 69468160 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x56108695d800 session 0x561088c90c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x561087057c00 session 0x56108bcfddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x5610899ee400 session 0x561088dfe1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x5610896d4800 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2574925 data_alloc: 234881024 data_used: 14766881
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x5610896d4c00 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x56108695d800 session 0x56108b73aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x561087057c00 session 0x561088d4e000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 359 ms_handle_reset con 0x5610896d4800 session 0x561088779340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 69410816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f6e0f000/0x0/0x4ffc00000, data 0x2b41a4f/0x2d39000, compress 0x0/0x0/0x0, omap 0x4cf6a, meta 0x6063096), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 69410816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 359 handle_osd_map epochs [359,360], i have 360, src has [1,360]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157351936 unmapped: 69394432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899ee400 session 0x56108b73a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899eec00 session 0x5610897ed6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f6e0e000/0x0/0x4ffc00000, data 0x2b43562/0x2d3c000, compress 0x0/0x0/0x0, omap 0x4d3b3, meta 0x6062c4d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2633424 data_alloc: 234881024 data_used: 14767494
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 69386240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x56108695d800 session 0x561086d55180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.341822624s of 10.002419472s, submitted: 134
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610896d4800 session 0x561086ecf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157319168 unmapped: 69427200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899ef400 session 0x56108cc2b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635905 data_alloc: 234881024 data_used: 14767592
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f6e11000/0x0/0x4ffc00000, data 0x2b43552/0x2d3b000, compress 0x0/0x0/0x0, omap 0x4d512, meta 0x6062aee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 361 ms_handle_reset con 0x5610891a4c00 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 361 ms_handle_reset con 0x561089a47c00 session 0x561088d4f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 362 ms_handle_reset con 0x5610899ee400 session 0x56108947f180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 362 handle_osd_map epochs [362,363], i have 363, src has [1,363]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x56108695d800 session 0x561086ecee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x561087057c00 session 0x561086d55500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648546 data_alloc: 234881024 data_used: 14768177
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x561089a47800 session 0x561086d481c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x5610899ef400 session 0x561087badc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 heartbeat osd_stat(store_statfs(0x4f6e05000/0x0/0x4ffc00000, data 0x2b48815/0x2d47000, compress 0x0/0x0/0x0, omap 0x4de23, meta 0x60621dd), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561089a47800 session 0x561088d4f180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x56108695d800 session 0x561086d488c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 69402624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561089a47c00 session 0x561088f46000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561087057c00 session 0x561087b14000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.017593384s of 10.174485207s, submitted: 80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 69386240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x5610899ee400 session 0x561086d55c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 365 ms_handle_reset con 0x5610896d4800 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161177600 unmapped: 65568768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f6dfa000/0x0/0x4ffc00000, data 0x2b4c034/0x2d50000, compress 0x0/0x0/0x0, omap 0x4e3fa, meta 0x6061c06), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x5610899ef400 session 0x56108947f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x561089a47800 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719695 data_alloc: 234881024 data_used: 23751676
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x561089a47c00 session 0x561087b9d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 367 ms_handle_reset con 0x5610896d4800 session 0x5610897ec000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 367 ms_handle_reset con 0x5610899ee400 session 0x56108bcfd340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 65503232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 368 ms_handle_reset con 0x561089a47400 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 368 ms_handle_reset con 0x5610899ef400 session 0x56108b73b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161251328 unmapped: 65495040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 368 handle_osd_map epochs [368,369], i have 369, src has [1,369]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 369 ms_handle_reset con 0x561089a47800 session 0x561087b41180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161398784 unmapped: 65347584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 369 ms_handle_reset con 0x561089a46000 session 0x561087b15500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2727282 data_alloc: 234881024 data_used: 23752532
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f6df0000/0x0/0x4ffc00000, data 0x2b53069/0x2d5a000, compress 0x0/0x0/0x0, omap 0x4ef1c, meta 0x60610e4), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161398784 unmapped: 65347584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 370 ms_handle_reset con 0x5610896d4800 session 0x56108b73b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6df0000/0x0/0x4ffc00000, data 0x2b53069/0x2d5a000, compress 0x0/0x0/0x0, omap 0x4ef1c, meta 0x60610e4), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 65339392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.410625458s of 10.643949509s, submitted: 91
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 370 ms_handle_reset con 0x5610899ee400 session 0x5610897edc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 58990592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6a83000/0x0/0x4ffc00000, data 0x2ec0e54/0x30c9000, compress 0x0/0x0/0x0, omap 0x4f04b, meta 0x6060fb5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 58834944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2777598 data_alloc: 234881024 data_used: 24203092
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 371 ms_handle_reset con 0x561089a47400 session 0x561086cdea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 372 ms_handle_reset con 0x561089a46c00 session 0x561087b40700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 58703872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x561089a47000 session 0x561086cdee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 58703872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x5610899ef400 session 0x561088f476c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f6749000/0x0/0x4ffc00000, data 0x31f5527/0x3401000, compress 0x0/0x0/0x0, omap 0x4f5e3, meta 0x6060a1d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x5610896d4800 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 58695680 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x5610899ee400 session 0x56108bcfd500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x5610896c9c00 session 0x561088f46380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 58376192 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x561089a47400 session 0x561088f47a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 375 ms_handle_reset con 0x561089a46400 session 0x561086cde8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 375 ms_handle_reset con 0x561089a46000 session 0x561089492e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168386560 unmapped: 58359808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2794669 data_alloc: 234881024 data_used: 24203408
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168394752 unmapped: 58351616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610896c9c00 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f673c000/0x0/0x4ffc00000, data 0x31fc59d/0x340e000, compress 0x0/0x0/0x0, omap 0x5011f, meta 0x605fee1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f673c000/0x0/0x4ffc00000, data 0x31fc59d/0x340e000, compress 0x0/0x0/0x0, omap 0x5011f, meta 0x605fee1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.340101242s of 11.138490677s, submitted: 175
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610899ef400 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610896d4800 session 0x561086dfbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610899ee400 session 0x561087b38e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 377 ms_handle_reset con 0x5610896c9c00 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2804968 data_alloc: 234881024 data_used: 24204646
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169549824 unmapped: 57196544 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 378 ms_handle_reset con 0x561089a46000 session 0x561086cdf6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 378 ms_handle_reset con 0x561089a46400 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169615360 unmapped: 57131008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 379 ms_handle_reset con 0x5610896c9c00 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169623552 unmapped: 57122816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6730000/0x0/0x4ffc00000, data 0x3201b3a/0x341a000, compress 0x0/0x0/0x0, omap 0x50a34, meta 0x605f5cc), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 379 ms_handle_reset con 0x5610896d4800 session 0x561088f476c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169623552 unmapped: 57122816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6730000/0x0/0x4ffc00000, data 0x3201b3a/0x341a000, compress 0x0/0x0/0x0, omap 0x50a34, meta 0x605f5cc), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 57106432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x561089a47400 session 0x56108b73b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x561089a46000 session 0x561088c90fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x5610896c9000 session 0x561089492000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2815534 data_alloc: 234881024 data_used: 24206185
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f672f000/0x0/0x4ffc00000, data 0x320361d/0x341d000, compress 0x0/0x0/0x0, omap 0x50b67, meta 0x605f499), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169648128 unmapped: 57098240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 380 handle_osd_map epochs [381,381], i have 381, src has [1,381]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x561089a47000 session 0x561087b39500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x5610896c9c00 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 57090048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x5610899ee400 session 0x56108944c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6729000/0x0/0x4ffc00000, data 0x3205247/0x3421000, compress 0x0/0x0/0x0, omap 0x50fdd, meta 0x605f023), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 57090048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6729000/0x0/0x4ffc00000, data 0x3205247/0x3421000, compress 0x0/0x0/0x0, omap 0x50fdd, meta 0x605f023), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 382 ms_handle_reset con 0x5610896d4800 session 0x561087b9d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 57065472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 382 ms_handle_reset con 0x561089a46000 session 0x561086cde8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.642509460s of 10.354652405s, submitted: 105
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 383 ms_handle_reset con 0x5610896c9c00 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 57040896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x561089a47400 session 0x561086ecf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610899ee400 session 0x561086cba8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610896c8c00 session 0x56108947e000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833691 data_alloc: 234881024 data_used: 24206583
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610896d4800 session 0x561086d54540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 56885248 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 ms_handle_reset con 0x5610896c9800 session 0x561086ecfc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 ms_handle_reset con 0x561089a47000 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169869312 unmapped: 56877056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f671f000/0x0/0x4ffc00000, data 0x320c173/0x342b000, compress 0x0/0x0/0x0, omap 0x51b3b, meta 0x605e4c5), peers [0,1] op hist [0,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171016192 unmapped: 55730176 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896c8c00 session 0x561086d49500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896c9c00 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170844160 unmapped: 55902208 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896d4800 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610896c9800 session 0x561086d55880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610896c8c00 session 0x561088f47880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610899ee400 session 0x56108bcfda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 55533568 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2842749 data_alloc: 234881024 data_used: 24210388
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171278336 unmapped: 55468032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f66ee000/0x0/0x4ffc00000, data 0x323b66e/0x345a000, compress 0x0/0x0/0x0, omap 0x5231d, meta 0x605dce3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171286528 unmapped: 55459840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f66ee000/0x0/0x4ffc00000, data 0x323b66e/0x345a000, compress 0x0/0x0/0x0, omap 0x5231d, meta 0x605dce3), peers [0,1] op hist [0,0,0,0,1,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 389 ms_handle_reset con 0x561089a47400 session 0x5610897eda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 389 ms_handle_reset con 0x5610896c8000 session 0x561088f47180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 55435264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 55435264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845989 data_alloc: 234881024 data_used: 24262514
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.512095451s of 11.085718155s, submitted: 186
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f66eb000/0x0/0x4ffc00000, data 0x323ed67/0x345f000, compress 0x0/0x0/0x0, omap 0x529d2, meta 0x605d62e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 390 ms_handle_reset con 0x5610896c8c00 session 0x56108947e1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 391 ms_handle_reset con 0x5610899ee400 session 0x561086cde380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871510 data_alloc: 234881024 data_used: 24905586
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f6632000/0x0/0x4ffc00000, data 0x32f683a/0x3518000, compress 0x0/0x0/0x0, omap 0x52e65, meta 0x605d19b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 392 ms_handle_reset con 0x561089a47400 session 0x56108944d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 393 ms_handle_reset con 0x561088d25c00 session 0x561089493dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 393 ms_handle_reset con 0x5610896c9800 session 0x5610897ed6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 394 ms_handle_reset con 0x561088d25c00 session 0x561086cdfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f6624000/0x0/0x4ffc00000, data 0x32fbbb6/0x3521000, compress 0x0/0x0/0x0, omap 0x53560, meta 0x605caa0), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 396 ms_handle_reset con 0x5610896c8c00 session 0x561086ecf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 396 ms_handle_reset con 0x5610899ee400 session 0x561088f47a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885857 data_alloc: 234881024 data_used: 24910295
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f6621000/0x0/0x4ffc00000, data 0x32ff25d/0x3527000, compress 0x0/0x0/0x0, omap 0x54a2d, meta 0x605b5d3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885857 data_alloc: 234881024 data_used: 24910295
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.559764862s of 16.016971588s, submitted: 118
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f6625000/0x0/0x4ffc00000, data 0x32ff25d/0x3527000, compress 0x0/0x0/0x0, omap 0x54a2d, meta 0x605b5d3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f6620000/0x0/0x4ffc00000, data 0x3300d14/0x352a000, compress 0x0/0x0/0x0, omap 0x54b61, meta 0x605b49f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 397 ms_handle_reset con 0x561089a47400 session 0x56108947f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2887447 data_alloc: 234881024 data_used: 24910295
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f6620000/0x0/0x4ffc00000, data 0x3300d14/0x352a000, compress 0x0/0x0/0x0, omap 0x54b61, meta 0x605b49f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171474944 unmapped: 55271424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 398 ms_handle_reset con 0x5610891a4000 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 398 ms_handle_reset con 0x561088d25c00 session 0x561086cba8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171491328 unmapped: 55255040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610899ee400 session 0x561088dfee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610896c8c00 session 0x561086cba540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171360256 unmapped: 55386112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f661c000/0x0/0x4ffc00000, data 0x33044a0/0x3530000, compress 0x0/0x0/0x0, omap 0x55133, meta 0x605aecd), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2893368 data_alloc: 234881024 data_used: 24910908
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x561089a47400 session 0x561086ecf500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.551637650s of 10.607804298s, submitted: 44
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610891a9800 session 0x561086cdee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 400 ms_handle_reset con 0x561088d25c00 session 0x56108947f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172449792 unmapped: 54296576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f660d000/0x0/0x4ffc00000, data 0x33096e3/0x3539000, compress 0x0/0x0/0x0, omap 0x5583f, meta 0x605a7c1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172457984 unmapped: 54288384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610896c9c00 session 0x561086d481c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x561089a47000 session 0x561088da36c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f660d000/0x0/0x4ffc00000, data 0x33096e3/0x3539000, compress 0x0/0x0/0x0, omap 0x5583f, meta 0x605a7c1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902299 data_alloc: 234881024 data_used: 24910908
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610896c8c00 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172482560 unmapped: 54263808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172482560 unmapped: 54263808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610899ee400 session 0x561087baca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x561088d25c00 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172515328 unmapped: 54231040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 403 ms_handle_reset con 0x5610896c8c00 session 0x561087b9c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 54222848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 403 ms_handle_reset con 0x561089a47000 session 0x56108944c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x5610896c9c00 session 0x561088e20000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892571 data_alloc: 234881024 data_used: 24778060
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f66e9000/0x0/0x4ffc00000, data 0x322cebb/0x345f000, compress 0x0/0x0/0x0, omap 0x5a56e, meta 0x6055a92), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059605598s of 10.246625900s, submitted: 110
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x561087057c00 session 0x561088e20540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x56108695d800 session 0x561088d4e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x561088d25c00 session 0x561088f476c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x5610896c8c00 session 0x561088da2c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172548096 unmapped: 54198272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 54190080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x5610896c9c00 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 54181888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x561089a47000 session 0x56108bcfd340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x561088d25c00 session 0x561087b14e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2895321 data_alloc: 234881024 data_used: 24778025
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x5610896c8c00 session 0x561088dff500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x56108695d800 session 0x561088d4ee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166330368 unmapped: 60416000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f7615000/0x0/0x4ffc00000, data 0x2300688/0x2533000, compress 0x0/0x0/0x0, omap 0x5ac4d, meta 0x60553b3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x5610896c9c00 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561089a47400 session 0x561088f47180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166338560 unmapped: 60407808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x56108695d800 session 0x561088f476c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166338560 unmapped: 60407808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561088d25c00 session 0x561086d55500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561088d27c00 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x5610896c9c00 session 0x561086cdf880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166379520 unmapped: 60366848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 407 ms_handle_reset con 0x5610896c8c00 session 0x561088dfe540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 408 ms_handle_reset con 0x561088d25c00 session 0x561086cdf340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166379520 unmapped: 60366848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2764811 data_alloc: 234881024 data_used: 14780697
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 408 ms_handle_reset con 0x561088d27c00 session 0x561087b401c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x56108695d800 session 0x561087bad6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x5610896c9c00 session 0x561088c91500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x561088d26000 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f760f000/0x0/0x4ffc00000, data 0x2305955/0x253b000, compress 0x0/0x0/0x0, omap 0x5b58c, meta 0x6054a74), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x561088d25c00 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.020929337s of 10.765624046s, submitted: 189
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x56108695d800 session 0x56108944d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x561088d26000 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x561088d27c00 session 0x561088da2000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166420480 unmapped: 60325888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x5610896c9c00 session 0x561088d01500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x56108695d800 session 0x561088da28c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d25c00 session 0x561086291180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d26000 session 0x561088c91dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d27c00 session 0x561086dfb6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2773857 data_alloc: 234881024 data_used: 14780969
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x5610891adc00 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x56108695d800 session 0x561088c90fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d25c00 session 0x56108947e000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d26000 session 0x561086ecf6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d27c00 session 0x561086cbac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7606000/0x0/0x4ffc00000, data 0x230ac3e/0x2544000, compress 0x0/0x0/0x0, omap 0x5c2f6, meta 0x6053d0a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7606000/0x0/0x4ffc00000, data 0x230ac3e/0x2544000, compress 0x0/0x0/0x0, omap 0x5c2f6, meta 0x6053d0a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x56108695a400 session 0x561086d55340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x5610891ab400 session 0x561086e9a700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x56108695a400 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d25c00 session 0x561086d541c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d26000 session 0x561088da2e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 60907520 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x56108695d800 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x56108695a400 session 0x561086cbafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x561088d25c00 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2798118 data_alloc: 234881024 data_used: 14781269
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f73d5000/0x0/0x4ffc00000, data 0x253a82e/0x2775000, compress 0x0/0x0/0x0, omap 0x5ca97, meta 0x6053569), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x561088d26000 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x5610891ab400 session 0x561086d541c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.274993896s of 10.770775795s, submitted: 171
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 60817408 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 415 ms_handle_reset con 0x561088d27c00 session 0x561086d55340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 60817408 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 415 ms_handle_reset con 0x56108695a400 session 0x561088da2540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808176 data_alloc: 234881024 data_used: 14781285
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x561088d25c00 session 0x561088dfe540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f73cc000/0x0/0x4ffc00000, data 0x253fa1f/0x277e000, compress 0x0/0x0/0x0, omap 0x5d554, meta 0x6052aac), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x561088d26000 session 0x561088d00e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x5610891ab400 session 0x561088c91500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x561086f19400 session 0x56108bcfcc40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x56108695a400 session 0x561086291180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x561088d25c00 session 0x561086dfb6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812235 data_alloc: 234881024 data_used: 14782224
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x5610891ab400 session 0x561086d54540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f73c9000/0x0/0x4ffc00000, data 0x2541661/0x2781000, compress 0x0/0x0/0x0, omap 0x5d6f4, meta 0x605290c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561088d3f400 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561088d26000 session 0x561088d4f500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x56108695a400 session 0x561089493c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165945344 unmapped: 60801024 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x5610891ab400 session 0x561086e9a380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.983626366s of 10.283070564s, submitted: 60
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561086f03c00 session 0x561088d01880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f73c6000/0x0/0x4ffc00000, data 0x25431ff/0x2784000, compress 0x0/0x0/0x0, omap 0x5d81e, meta 0x60527e2), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 420 ms_handle_reset con 0x561088d2e800 session 0x561088e20700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836700 data_alloc: 234881024 data_used: 16977171
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165978112 unmapped: 60768256 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 420 ms_handle_reset con 0x561086eca800 session 0x561088f47880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 60792832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x561086eca800 session 0x561087b15dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 60792832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x56108695a400 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x561086f03c00 session 0x561086d49880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 421 handle_osd_map epochs [421,422], i have 422, src has [1,422]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f73b8000/0x0/0x4ffc00000, data 0x2549edd/0x2790000, compress 0x0/0x0/0x0, omap 0x5e905, meta 0x60516fb), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2841652 data_alloc: 234881024 data_used: 16977171
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166600704 unmapped: 60145664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 423 ms_handle_reset con 0x561088d2e800 session 0x561088d00380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 56213504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f6bb7000/0x0/0x4ffc00000, data 0x2d42af7/0x2f8b000, compress 0x0/0x0/0x0, omap 0x5ea31, meta 0x60515cf), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.865922928s of 10.292457581s, submitted: 117
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x5610891ab400 session 0x56108947e000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899028 data_alloc: 234881024 data_used: 17101173
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x5610891ab400 session 0x561087b15180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x56108695a400 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 425 ms_handle_reset con 0x561086eca800 session 0x561088e21500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f6bb4000/0x0/0x4ffc00000, data 0x2d4c104/0x2f96000, compress 0x0/0x0/0x0, omap 0x5f0c7, meta 0x6050f39), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900308 data_alloc: 234881024 data_used: 17105443
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f6bb4000/0x0/0x4ffc00000, data 0x2d4c104/0x2f96000, compress 0x0/0x0/0x0, omap 0x5f0c7, meta 0x6050f39), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583072662s of 10.026338577s, submitted: 48
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561088d25c00 session 0x5610894921c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561088d3f400 session 0x56108bcfce00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb1000/0x0/0x4ffc00000, data 0x2d4db83/0x2f99000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x56108695a400 session 0x561088dfee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 57810944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891a9c00 session 0x561087b40e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086e9e800 session 0x56108b8c6380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 57810944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086f19800 session 0x56108b8c68c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.400606155s of 17.484048843s, submitted: 4
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891ab800 session 0x56108b73b880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908056 data_alloc: 234881024 data_used: 17142307
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908056 data_alloc: 234881024 data_used: 17142307
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.337499619s of 11.355749130s, submitted: 9
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f399, meta 0x6050c67), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917452 data_alloc: 234881024 data_used: 17629731
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086f19800 session 0x561087b9c1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891a9c00 session 0x56108944d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917372 data_alloc: 234881024 data_used: 17626659
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.396843910s of 12.426069260s, submitted: 10
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917084 data_alloc: 234881024 data_used: 17626659
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169353216 unmapped: 57393152 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 57384960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 426 handle_osd_map epochs [426,427], i have 427, src has [1,427]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x5610891a2400 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 57311232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930167 data_alloc: 234881024 data_used: 19174947
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 57040896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561088d2ac00 session 0x561087b9da40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561088d2e400 session 0x561088f46a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561086e9e400 session 0x56108bcfd340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 56991744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 56991744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 56983552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x56108b8c6000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7d000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169803776 unmapped: 56942592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7d000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949118 data_alloc: 234881024 data_used: 19179043
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169803776 unmapped: 56942592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.832711220s of 11.049080849s, submitted: 47
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088e20000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x56108bcfc540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7f000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [0,0,0,0,3,0,1])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137618 data_alloc: 234881024 data_used: 19179043
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a2400 session 0x561087b9ca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086e9e400 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088f46000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x561088da3500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a9c00 session 0x561086d481c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4b3e000/0x0/0x4ffc00000, data 0x4e6b3a0/0x500e000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170311680 unmapped: 56434688 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124766 data_alloc: 234881024 data_used: 19206707
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x5610897ed500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088e21500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x56108bcfce00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a3000 session 0x56108bcfda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d06400 session 0x56108b8c7a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 56410112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 56410112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163050 data_alloc: 234881024 data_used: 19206707
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561086e9ba40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088d4f180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169582592 unmapped: 57163776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086958000 session 0x561088c90fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169713664 unmapped: 57032704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19000 session 0x56108b8c7dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.457416534s of 17.052862167s, submitted: 60
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087baac00 session 0x56108944d880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169738240 unmapped: 57008128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f453e000/0x0/0x4ffc00000, data 0x54693d3/0x560e000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169795584 unmapped: 56950784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205337 data_alloc: 234881024 data_used: 20534915
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169795584 unmapped: 56950784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f453e000/0x0/0x4ffc00000, data 0x54693d3/0x560e000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219277 data_alloc: 234881024 data_used: 21404291
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f446c000/0x0/0x4ffc00000, data 0x553b3d3/0x56e0000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.492341995s of 10.538371086s, submitted: 18
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 52559872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267287 data_alloc: 234881024 data_used: 21631619
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 52559872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 39886848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f390b000/0x0/0x4ffc00000, data 0x5d4c3d3/0x5ef1000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a9c00 session 0x561086d55180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086e9e400 session 0x56108944c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f390b000/0x0/0x4ffc00000, data 0x5d4c3d3/0x5ef1000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 42696704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561087b9d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 42213376 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 42213376 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338255 data_alloc: 234881024 data_used: 23040643
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184541184 unmapped: 42205184 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561087b40380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f31fd000/0x0/0x4ffc00000, data 0x677a3d3/0x691f000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087bab400 session 0x561086ece700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087bab400 session 0x561088f46700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 428 handle_osd_map epochs [428,429], i have 429, src has [1,429]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e400 session 0x561087bad500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f3302000/0x0/0x4ffc00000, data 0x66a63c3/0x684a000, compress 0x0/0x0/0x0, omap 0x605ca, meta 0x604fa36), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306134224s of 11.053565025s, submitted: 255
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561086290fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e800 session 0x561086d48fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 43048960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086f19800 session 0x561088d01880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311260 data_alloc: 234881024 data_used: 22084211
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f332b000/0x0/0x4ffc00000, data 0x65cdfa3/0x6821000, compress 0x0/0x0/0x0, omap 0x60ced, meta 0x604f313), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086958000 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086f19000 session 0x561086cba540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183869440 unmapped: 42876928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e400 session 0x561087b39340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561088d4fc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183877632 unmapped: 42868736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3304367 data_alloc: 234881024 data_used: 21987971
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e800 session 0x561087b41180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086958000 session 0x561086cbbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179437568 unmapped: 47308800 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561088d2e400 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x5610891a3000 session 0x5610897ec700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f3d70000/0x0/0x4ffc00000, data 0x5b8af0e/0x5ddb000, compress 0x0/0x0/0x0, omap 0x610d6, meta 0x604ef2a), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561086d55880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 51191808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 51191808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175603712 unmapped: 51142656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175603712 unmapped: 51142656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157254219s of 10.557867050s, submitted: 120
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086e9e400 session 0x561088f46c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086958000 session 0x56108947fa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105032 data_alloc: 234881024 data_used: 13201390
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b82000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x612b7, meta 0x604ed49), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x56108695a400 session 0x561088f476c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561088d2e400 session 0x561088da3500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104440 data_alloc: 234881024 data_used: 13201489
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104440 data_alloc: 234881024 data_used: 13201489
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.654470444s of 10.694202423s, submitted: 26
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x5610891a3000 session 0x56108944d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086f19000 session 0x561088dfec40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086958000 session 0x561088da36c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 48308224 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178446336 unmapped: 48300032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x56108bcfc700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f4b7c000/0x0/0x4ffc00000, data 0x4d795ed/0x4fce000, compress 0x0/0x0/0x0, omap 0x61c5b, meta 0x604e3a5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178446336 unmapped: 48300032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2e400 session 0x56108bcfd340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610891a3000 session 0x5610897eddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561087bab400 session 0x561088d4ea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086958000 session 0x561088c91500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x561088c91340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123416 data_alloc: 234881024 data_used: 16871603
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2ac00 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610891a9c00 session 0x5610897ec8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610899edc00 session 0x561088e20e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086fbec00 session 0x561086cbafc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f4b3d000/0x0/0x4ffc00000, data 0x4db964f/0x500f000, compress 0x0/0x0/0x0, omap 0x61c5b, meta 0x604e3a5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086958000 session 0x561086e9b340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125646 data_alloc: 234881024 data_used: 16958131
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x561086cbbc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.913650513s of 11.028878212s, submitted: 42
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2ac00 session 0x56108947f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 48013312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x5610891a9c00 session 0x561088f46700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 48013312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x5610891a9c00 session 0x561087baca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x561086958000 session 0x56108b8c7500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4b3c000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x62318, meta 0x604dce8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126570 data_alloc: 234881024 data_used: 16958033
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4b3c000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x62318, meta 0x604dce8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178765824 unmapped: 47980544 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x56108695a400 session 0x561086e9ac40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 432 handle_osd_map epochs [432,433], i have 433, src has [1,433]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f4b3e000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x6252b, meta 0x604dad5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 47923200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 433 ms_handle_reset con 0x561088d2ac00 session 0x56108bcfda40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f4b39000/0x0/0x4ffc00000, data 0x4dbcb88/0x5011000, compress 0x0/0x0/0x0, omap 0x6267a, meta 0x604d986), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187219968 unmapped: 39526400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561088d28400 session 0x561088c90fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f4b39000/0x0/0x4ffc00000, data 0x4dbcb88/0x5011000, compress 0x0/0x0/0x0, omap 0x6267a, meta 0x604d986), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561086fbec00 session 0x561086ece000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331166 data_alloc: 234881024 data_used: 18956881
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561086958000 session 0x561086dfa540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.596965790s of 11.039932251s, submitted: 86
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x56108695a400 session 0x56108bcfc540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180011008 unmapped: 46735360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d28400 session 0x561089492700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f2978000/0x0/0x4ffc00000, data 0x6f79332/0x71d2000, compress 0x0/0x0/0x0, omap 0x62da1, meta 0x604d25f), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180019200 unmapped: 46727168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d2ac00 session 0x56108944c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338362 data_alloc: 234881024 data_used: 18953825
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180199424 unmapped: 46546944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d2e400 session 0x561086d49880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x5610891a3000 session 0x561088da2c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561086958000 session 0x561088f46000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x56108695a400 session 0x561086cde380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 46530560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561086fbec00 session 0x561088da36c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 46497792 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561086958000 session 0x561087bad500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x56108695a400 session 0x561086e9aa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561088d2e400 session 0x561088da2e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 46473216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x5610891a3000 session 0x561087b40540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d28400 session 0x561087b41500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561086958000 session 0x561088da3500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 46448640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x56108695a400 session 0x5610897ed6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d2e400 session 0x561087b9ddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351919 data_alloc: 234881024 data_used: 20944465
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x5610891a3000 session 0x561087b15500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f2972000/0x0/0x4ffc00000, data 0x6f7caa0/0x71d6000, compress 0x0/0x0/0x0, omap 0x6359d, meta 0x604ca63), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d27400 session 0x56108b8c76c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d27000 session 0x561087b39340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117007 data_alloc: 234881024 data_used: 18469990
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.358721733s of 15.690736771s, submitted: 139
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x56108695a400 session 0x561086d541c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 437 handle_osd_map epochs [437,438], i have 438, src has [1,438]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x561088d2e400 session 0x561086d488c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x5610891ad800 session 0x561088dffc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175915008 unmapped: 50831360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x5610896c7800 session 0x561086cdf180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x5610891a3000 session 0x561086cba8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122139 data_alloc: 234881024 data_used: 18473988
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f53bb000/0x0/0x4ffc00000, data 0x45321ae/0x478d000, compress 0x0/0x0/0x0, omap 0x63aa8, meta 0x604c558), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x5610896c7800 session 0x561086d55180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 43466752 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x56108695a400 session 0x56108b8c76c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 440 ms_handle_reset con 0x561088d27000 session 0x561088da3500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 39870464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 440 ms_handle_reset con 0x561088d2e400 session 0x561088da2700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 440 heartbeat osd_stat(store_statfs(0x4f3907000/0x0/0x4ffc00000, data 0x4a36d4a/0x4c93000, compress 0x0/0x0/0x0, omap 0x63bbe, meta 0x71ec442), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 440 handle_osd_map epochs [441,441], i have 441, src has [1,441]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 39567360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199074 data_alloc: 234881024 data_used: 19055620
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 39567360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 441 ms_handle_reset con 0x56108695a400 session 0x5610897eca80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 39493632 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 39493632 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f36ae000/0x0/0x4ffc00000, data 0x4c8a959/0x4eea000, compress 0x0/0x0/0x0, omap 0x64128, meta 0x71ebed8), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 441 handle_osd_map epochs [442,442], i have 442, src has [1,442]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.086682320s of 10.323908806s, submitted: 84
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 442 ms_handle_reset con 0x561088d27000 session 0x5610897ec1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181136 data_alloc: 234881024 data_used: 19059716
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 443 ms_handle_reset con 0x5610896c7800 session 0x561088da2000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x5610891a3000 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f3ab7000/0x0/0x4ffc00000, data 0x4c8e163/0x4ef1000, compress 0x0/0x0/0x0, omap 0x64356, meta 0x71ebcaa), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x561088d2e400 session 0x561088d4e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 45015040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x5610891a9c00 session 0x561088d008c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x561086958000 session 0x561088f47c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 44998656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x56108695a400 session 0x561087b15340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 445 ms_handle_reset con 0x561088d2e400 session 0x56108cc2b180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 44998656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f3ab1000/0x0/0x4ffc00000, data 0x4c9190b/0x4ef7000, compress 0x0/0x0/0x0, omap 0x649dc, meta 0x71eb624), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191814 data_alloc: 234881024 data_used: 19084292
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 446 ms_handle_reset con 0x561088d27000 session 0x56108d0a4fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f3ab1000/0x0/0x4ffc00000, data 0x4c934b5/0x4ef9000, compress 0x0/0x0/0x0, omap 0x64af4, meta 0x71eb50c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 447 ms_handle_reset con 0x561086958000 session 0x5610894936c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 44965888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x56108695a400 session 0x561086cde8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x561088d2e400 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x5610891a9c00 session 0x561086d55dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.562074661s of 10.011097908s, submitted: 148
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x5610891a3000 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x561086958000 session 0x561086cbbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 44777472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x56108695a400 session 0x56108d0a4000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f3aaa000/0x0/0x4ffc00000, data 0x4c96cf7/0x4f00000, compress 0x0/0x0/0x0, omap 0x6517f, meta 0x71eae81), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 450 ms_handle_reset con 0x5610896c7800 session 0x561089492e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 44769280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376517 data_alloc: 234881024 data_used: 19084806
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 44769280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181993472 unmapped: 44752896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 44736512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 451 heartbeat osd_stat(store_statfs(0x4f1adf000/0x0/0x4ffc00000, data 0x6c5bfd5/0x6ec9000, compress 0x0/0x0/0x0, omap 0x65d69, meta 0x71ea297), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 43687936 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382177 data_alloc: 234881024 data_used: 19085663
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 ms_handle_reset con 0x561088d2e400 session 0x5610897ec8c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196254730s of 10.522126198s, submitted: 58
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x5610891a9c00 session 0x561086ecfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x561086958000 session 0x561089493340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x56108695a400 session 0x561088dff340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 43671552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x561088d2e400 session 0x56108944d500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388530 data_alloc: 234881024 data_used: 19085679
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3468914 data_alloc: 251658240 data_used: 32346065
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 453 handle_osd_map epochs [453,454], i have 454, src has [1,454]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 454 ms_handle_reset con 0x561088d2dc00 session 0x561086290fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f1ad4000/0x0/0x4ffc00000, data 0x6c62d0f/0x6ed6000, compress 0x0/0x0/0x0, omap 0x66d5f, meta 0x71e92a1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.200019836s of 11.241814613s, submitted: 38
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 455 ms_handle_reset con 0x561087baa000 session 0x561088f46c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3476024 data_alloc: 251658240 data_used: 32346065
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f1ad4000/0x0/0x4ffc00000, data 0x6c62d0f/0x6ed6000, compress 0x0/0x0/0x0, omap 0x66d5f, meta 0x71e92a1), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 456 ms_handle_reset con 0x561086958000 session 0x56108944d340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 456 ms_handle_reset con 0x56108695a400 session 0x561088dff180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 22429696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 456 handle_osd_map epochs [456,457], i have 457, src has [1,457]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 457 ms_handle_reset con 0x561087baa000 session 0x561088c91a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 20643840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 19472384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 457 ms_handle_reset con 0x561088d2dc00 session 0x561088d00000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614725 data_alloc: 251658240 data_used: 34719697
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 19423232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 457 heartbeat osd_stat(store_statfs(0x4ef54f000/0x0/0x4ffc00000, data 0x7e354b7/0x80ab000, compress 0x0/0x0/0x0, omap 0x66f93, meta 0x838906d), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207413248 unmapped: 19333120 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 19275776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 19275776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 459 ms_handle_reset con 0x561088d2e400 session 0x561088dfea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 24207360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3611079 data_alloc: 251658240 data_used: 34723793
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x561086958000 session 0x561086d49c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ef754000/0x0/0x4ffc00000, data 0x7e3a6fa/0x80b4000, compress 0x0/0x0/0x0, omap 0x6778d, meta 0x8388873), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202555392 unmapped: 24190976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926978111s of 11.327394485s, submitted: 171
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x56108695a400 session 0x561088d00540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202555392 unmapped: 24190976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x5610896c7800 session 0x561086dfaa80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 24174592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x5610891ad800 session 0x561088dfe700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x561088d2dc00 session 0x56108b73afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 460 handle_osd_map epochs [460,461], i have 461, src has [1,461]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202596352 unmapped: 24150016 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 461 ms_handle_reset con 0x561087baa000 session 0x561089492c40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 461 heartbeat osd_stat(store_statfs(0x4ef753000/0x0/0x4ffc00000, data 0x7e3c304/0x80b7000, compress 0x0/0x0/0x0, omap 0x67e21, meta 0x83881df), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 461 ms_handle_reset con 0x561086958000 session 0x561086ecfa40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x56108695a400 session 0x56108944c540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616571 data_alloc: 251658240 data_used: 34724476
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x5610891ad800 session 0x56108d0a4fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 462 heartbeat osd_stat(store_statfs(0x4ef751000/0x0/0x4ffc00000, data 0x7e3de92/0x80b9000, compress 0x0/0x0/0x0, omap 0x67f3b, meta 0x83880c5), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x5610896c7800 session 0x5610887788c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 464 ms_handle_reset con 0x561086958000 session 0x56108b8c76c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3624828 data_alloc: 251658240 data_used: 34725061
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 464 ms_handle_reset con 0x56108695a400 session 0x561088da21c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561087baa000 session 0x561086d55180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 heartbeat osd_stat(store_statfs(0x4ef748000/0x0/0x4ffc00000, data 0x7e41547/0x80c0000, compress 0x0/0x0/0x0, omap 0x68664, meta 0x838799c), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.062519073s of 10.297443390s, submitted: 79
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x5610891ad800 session 0x561088f47500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625502 data_alloc: 251658240 data_used: 34725820
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561088d2e400 session 0x5610894936c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 heartbeat osd_stat(store_statfs(0x4ef748000/0x0/0x4ffc00000, data 0x7e430d5/0x80c2000, compress 0x0/0x0/0x0, omap 0x6877f, meta 0x8387881), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561086958000 session 0x56108944c700
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x56108695a400 session 0x561086cbbdc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 465 handle_osd_map epochs [465,466], i have 466, src has [1,466]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x56108b8c6fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088da2fc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958c00 session 0x561086d54a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x56108b73a1c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630068 data_alloc: 251658240 data_used: 35159996
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4ef745000/0x0/0x4ffc00000, data 0x7e44b54/0x80c5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088e20e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.169008255s of 12.200960159s, submitted: 17
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x561087b15c00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4ef745000/0x0/0x4ffc00000, data 0x7e44b54/0x80c5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3607720 data_alloc: 251658240 data_used: 34858940
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x56108cc2a380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb45000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb45000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3607720 data_alloc: 251658240 data_used: 34858940
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202850304 unmapped: 23896064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614752 data_alloc: 251658240 data_used: 36005820
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614752 data_alloc: 251658240 data_used: 36005820
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203964416 unmapped: 22781952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x56108695a400 session 0x56108b8c7a40
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.193916321s of 20.199316025s, submitted: 2
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x561088dff340
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x561087b9d180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619672 data_alloc: 251658240 data_used: 36943804
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x56108947f6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x561087b40540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.148120880s of 17.195735931s, submitted: 26
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561087b9c380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x561086e9afc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x561086ecea80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x561088d01880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x561088f47dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323678 data_alloc: 234881024 data_used: 20573514
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088e20000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3364254 data_alloc: 251658240 data_used: 27454794
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3364254 data_alloc: 251658240 data_used: 27454794
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.381216049s of 16.466739655s, submitted: 3
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 27443200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 27443200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2f000/0x0/0x4ffc00000, data 0x595eae2/0x5bdd000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3409358 data_alloc: 251658240 data_used: 27585866
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2f000/0x0/0x4ffc00000, data 0x595eae2/0x5bdd000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3409358 data_alloc: 251658240 data_used: 27585866
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x5610897ec540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3411064 data_alloc: 251658240 data_used: 27585866
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.991856575s of 14.100547791s, submitted: 33
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 27369472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199589888 unmapped: 27156480 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199606272 unmapped: 27140096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199606272 unmapped: 27140096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3419907 data_alloc: 251658240 data_used: 27585866
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199655424 unmapped: 27090944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199655424 unmapped: 27090944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 26869760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891ad800 session 0x561087bad500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425391 data_alloc: 251658240 data_used: 27585866
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 93K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 8783 syncs, 2.73 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8971 writes, 32K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 26.58 MB, 0.04 MB/s#012Interval WAL: 8971 writes, 3901 syncs, 2.30 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425903 data_alloc: 251658240 data_used: 27688266
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.739727020s of 13.804318428s, submitted: 30
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610899ecc00 session 0x561087b40a80
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1be3000/0x0/0x4ffc00000, data 0x59a7e18/0x5c29000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x5610887788c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3430433 data_alloc: 251658240 data_used: 27688266
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bdf000/0x0/0x4ffc00000, data 0x59a8e8a/0x5c2c000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2a800 session 0x561086dfb880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2e400 session 0x561088d4f880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3438302 data_alloc: 251658240 data_used: 27688266
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3438814 data_alloc: 251658240 data_used: 27790666
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x56108bcfdc00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891ad800 session 0x561087b14380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610899ecc00 session 0x561086cba000
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.265277863s of 18.318876266s, submitted: 23
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a5800 session 0x561086d55880
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2e400 session 0x561087b9d6c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3429720 data_alloc: 251658240 data_used: 27790666
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x56108947ee00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 469 handle_osd_map epochs [469,470], i have 470, src has [1,470]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 470 ms_handle_reset con 0x5610891ad800 session 0x56108b73b500
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599ce18/0x5c1e000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 25583616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 470 ms_handle_reset con 0x5610899ecc00 session 0x56108b73b180
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 25583616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 471 ms_handle_reset con 0x561088d2a400 session 0x561088d4e540
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 25550848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 471 ms_handle_reset con 0x561088d2a400 session 0x561086e9a380
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 25550848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3436132 data_alloc: 251658240 data_used: 27794664
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561088d2e400 session 0x561088da3dc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201203712 unmapped: 25542656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x5610891a2800 session 0x5610897ece00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201203712 unmapped: 25542656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f1be5000/0x0/0x4ffc00000, data 0x59a01e8/0x5c27000, compress 0x0/0x0/0x0, omap 0x6dbbe, meta 0x8382442), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561086958000 session 0x5610897eddc0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561087baa000 session 0x561087b416c0
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561086958000 session 0x561088e20e00
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3319998 data_alloc: 234881024 data_used: 20578738
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f2905000/0x0/0x4ffc00000, data 0x4c80186/0x4f06000, compress 0x0/0x0/0x0, omap 0x6dbbe, meta 0x8382442), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.971760750s of 14.100159645s, submitted: 87
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323428 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2901000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323428 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2901000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.675070763s of 11.681387901s, submitted: 10
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197558272 unmapped: 29188096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197558272 unmapped: 29188096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197656576 unmapped: 29089792 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}'
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}'
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 28778496 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197935104 unmapped: 28811264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:03:17 np0005605476 ceph-osd[87792]: do_command 'log dump' '{prefix=log dump}'
Feb  2 13:03:17 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19132 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927577961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 13:03:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:17 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19136 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} v 0)
Feb  2 13:03:17 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} : dispatch
Feb  2 13:03:18 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19140 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} v 0)
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} : dispatch
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534658312' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 13:03:18 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19142 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 13:03:18 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067950381' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 13:03:19 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19146 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 13:03:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668590428' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 13:03:19 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:03:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:03:19 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 13:03:19 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448364529' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 13:03:20 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19154 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:03:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  2 13:03:20 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400504042' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb  2 13:03:20 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19158 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:03:20 np0005605476 nova_compute[239846]: 2026-02-02 18:03:20.720 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:03:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:03:20 np0005605476 nova_compute[239846]: 2026-02-02 18:03:20.723 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:03:21 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19162 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:03:21 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 13:03:21 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072274000' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb  2 13:03:21 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19166 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:03:21 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: 2026-02-02T18:03:21.626+0000 7f7c633f1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:03:21 np0005605476 ceph-mgr[75493]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073666 data_alloc: 218103808 data_used: 8262
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073666 data_alloc: 218103808 data_used: 8262
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 17743872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fc1d0000/0x0/0x4ffc00000, data 0xd99916/0xe5a000, compress 0x0/0x0/0x0, omap 0xeb9d, meta 0x2bc1463), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.033265114s of 11.041836739s, submitted: 22
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 17612800 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 123 ms_handle_reset con 0x555b29008000 session 0x555b28de36c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 17604608 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 17473536 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b2901dc00 session 0x555b28ea01c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 17473536 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fc1cb000/0x0/0x4ffc00000, data 0xd9d0b5/0xe61000, compress 0x0/0x0/0x0, omap 0xec0b, meta 0x2bc13f5), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086711 data_alloc: 218103808 data_used: 8875
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 17203200 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b2901d400 session 0x555b28e1d180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b2901d800 session 0x555b29772700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b2901d400 session 0x555b26fd2fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 17342464 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 17342464 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b29008000 session 0x555b2a29cfc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b29bfd000 session 0x555b281cda40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 17342464 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 ms_handle_reset con 0x555b2901d000 session 0x555b26fd2000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 125 ms_handle_reset con 0x555b29008000 session 0x555b298916c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 17391616 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090574 data_alloc: 218103808 data_used: 8894
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 17391616 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 ms_handle_reset con 0x555b2901d400 session 0x555b2a41ba40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fc1c3000/0x0/0x4ffc00000, data 0xda085d/0xe67000, compress 0x0/0x0/0x0, omap 0xf60f, meta 0x2bc09f1), peers [0,2] op hist [0,0,1])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 ms_handle_reset con 0x555b2901d800 session 0x555b29d98c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 17383424 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 17383424 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.070593834s of 12.334794998s, submitted: 104
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 ms_handle_reset con 0x555b29bfd000 session 0x555b27d0f180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 17367040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 17367040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 ms_handle_reset con 0x555b2901cc00 session 0x555b27d0ea80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095228 data_alloc: 218103808 data_used: 9166
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 17235968 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fc1c2000/0x0/0x4ffc00000, data 0xda22f8/0xe6a000, compress 0x0/0x0/0x0, omap 0xfa77, meta 0x2bc0589), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 ms_handle_reset con 0x555b29008000 session 0x555b296c0700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 17235968 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 17235968 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 126 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 127 ms_handle_reset con 0x555b2901d400 session 0x555b29891a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 127 ms_handle_reset con 0x555b2901d800 session 0x555b2a29dc00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86458368 unmapped: 17227776 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 128 ms_handle_reset con 0x555b29bfd000 session 0x555b298c1180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 17088512 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 ms_handle_reset con 0x555b29002000 session 0x555b267468c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108720 data_alloc: 218103808 data_used: 9182
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc1b1000/0x0/0x4ffc00000, data 0xda76ae/0xe75000, compress 0x0/0x0/0x0, omap 0xfe31, meta 0x2bc01cf), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 ms_handle_reset con 0x555b2901d800 session 0x555b28e45180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86458368 unmapped: 17227776 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc1b1000/0x0/0x4ffc00000, data 0xda76ae/0xe75000, compress 0x0/0x0/0x0, omap 0xfe31, meta 0x2bc01cf), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 ms_handle_reset con 0x555b29008000 session 0x555b29773c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 130 ms_handle_reset con 0x555b2901d400 session 0x555b29891880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 17096704 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 130 ms_handle_reset con 0x555b29003c00 session 0x555b281b1880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b29bfd000 session 0x555b293bb340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b29003800 session 0x555b2a4368c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16949248 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86736896 unmapped: 16949248 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.533582687s of 10.833000183s, submitted: 92
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b29003c00 session 0x555b29bc9a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fc1ae000/0x0/0x4ffc00000, data 0xdaae3c/0xe7a000, compress 0x0/0x0/0x0, omap 0x10283, meta 0x2bbfd7d), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b29008000 session 0x555b2966c380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 16941056 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b2901d800 session 0x555b26fd2540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 ms_handle_reset con 0x555b29003400 session 0x555b2a444540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110926 data_alloc: 218103808 data_used: 10408
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 132 ms_handle_reset con 0x555b2901d400 session 0x555b29890380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 16777216 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 132 ms_handle_reset con 0x555b29003800 session 0x555b29ded500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 132 ms_handle_reset con 0x555b29003c00 session 0x555b2966d340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 16760832 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 133 ms_handle_reset con 0x555b29008000 session 0x555b2a437500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 16744448 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 134 ms_handle_reset con 0x555b2901d800 session 0x555b28e44a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 16744448 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fc1aa000/0x0/0x4ffc00000, data 0xdae604/0xe80000, compress 0x0/0x0/0x0, omap 0xfa94, meta 0x2bc056c), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 134 ms_handle_reset con 0x555b29003c00 session 0x555b26b4da40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 16744448 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 135 ms_handle_reset con 0x555b29003800 session 0x555b29772380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 135 ms_handle_reset con 0x555b29008000 session 0x555b296c1500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126353 data_alloc: 218103808 data_used: 10993
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 136 ms_handle_reset con 0x555b2901d400 session 0x555b29b31500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 16728064 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29003000 session 0x555b28e34540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29002c00 session 0x555b293ba380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 15663104 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29003800 session 0x555b26fd3180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88031232 unmapped: 15654912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29003c00 session 0x555b28de3880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29008000 session 0x555b26b4c380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 ms_handle_reset con 0x555b29002800 session 0x555b273b5880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 137 handle_osd_map epochs [137,138], i have 138, src has [1,138]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 138 ms_handle_reset con 0x555b2901d400 session 0x555b2a4441c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 14409728 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 14409728 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.137963295s of 10.431513786s, submitted: 143
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 139 ms_handle_reset con 0x555b29002800 session 0x555b2a436540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc19a000/0x0/0x4ffc00000, data 0xdb71cc/0xe90000, compress 0x0/0x0/0x0, omap 0xea8f, meta 0x2bc1571), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 140 ms_handle_reset con 0x555b29002c00 session 0x555b298c1500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140637 data_alloc: 218103808 data_used: 12263
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 14409728 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 140 ms_handle_reset con 0x555b29003c00 session 0x555b28de2c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 140 ms_handle_reset con 0x555b29003800 session 0x555b2966cc40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 140 ms_handle_reset con 0x555b29008000 session 0x555b281b0c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 141 ms_handle_reset con 0x555b29002800 session 0x555b28ea0700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89219072 unmapped: 14467072 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 141 ms_handle_reset con 0x555b29002c00 session 0x555b2948a380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 141 ms_handle_reset con 0x555b29003c00 session 0x555b28de2e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 141 ms_handle_reset con 0x555b2901d400 session 0x555b29772000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 14491648 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141099 data_alloc: 218103808 data_used: 12519
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 142 ms_handle_reset con 0x555b29002800 session 0x555b281cc380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc193000/0x0/0x4ffc00000, data 0xdbc41f/0xe97000, compress 0x0/0x0/0x0, omap 0xece7, meta 0x2bc1319), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 143 ms_handle_reset con 0x555b29002c00 session 0x555b28e45500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc18c000/0x0/0x4ffc00000, data 0xdbfaf4/0xe9e000, compress 0x0/0x0/0x0, omap 0xeeef, meta 0x2bc1111), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 143 ms_handle_reset con 0x555b29003c00 session 0x555b28e45c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 143 ms_handle_reset con 0x555b29008000 session 0x555b2a4296c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147773 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.513998032s of 10.609919548s, submitted: 57
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 14499840 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 ms_handle_reset con 0x555b29002400 session 0x555b281b01c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 ms_handle_reset con 0x555b29002400 session 0x555b28e44e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc18a000/0x0/0x4ffc00000, data 0xdc16da/0xea0000, compress 0x0/0x0/0x0, omap 0xf0d1, meta 0x2bc0f2f), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149549 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc18a000/0x0/0x4ffc00000, data 0xdc16da/0xea0000, compress 0x0/0x0/0x0, omap 0xf0d1, meta 0x2bc0f2f), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 144 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b29003c00 session 0x555b2a29ce00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152323 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b29008000 session 0x555b29773500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.074001312s of 11.105423927s, submitted: 39
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b2901d000 session 0x555b28e44540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc187000/0x0/0x4ffc00000, data 0xdc3159/0xea3000, compress 0x0/0x0/0x0, omap 0xf30b, meta 0x2bc0cf5), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b29bf6000 session 0x555b2a41a700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154889 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b29002400 session 0x555b29b55880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 ms_handle_reset con 0x555b29003c00 session 0x555b2966ddc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 14663680 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b29008000 session 0x555b28ea0540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 14647296 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 14647296 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b2901d000 session 0x555b26fd3c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b29bf4400 session 0x555b281b1340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b29002400 session 0x555b2966c700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b29003c00 session 0x555b28ea1c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc184000/0x0/0x4ffc00000, data 0xdc4d67/0xea8000, compress 0x0/0x0/0x0, omap 0xe46f, meta 0x2bc1b91), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b29008000 session 0x555b29b30e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 14680064 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 ms_handle_reset con 0x555b2901d000 session 0x555b29699c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 ms_handle_reset con 0x555b29002c00 session 0x555b28e35340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165570 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 14655488 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 ms_handle_reset con 0x555b29002400 session 0x555b28de2a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 ms_handle_reset con 0x555b29003c00 session 0x555b273b56c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.010939598s of 10.143727303s, submitted: 79
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc180000/0x0/0x4ffc00000, data 0xdc6947/0xeaa000, compress 0x0/0x0/0x0, omap 0x14ae7, meta 0x2bbb519), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89071616 unmapped: 14614528 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 ms_handle_reset con 0x555b29008000 session 0x555b298c16c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 ms_handle_reset con 0x555b2901d000 session 0x555b2966da40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc183000/0x0/0x4ffc00000, data 0xdc68e5/0xea9000, compress 0x0/0x0/0x0, omap 0x15117, meta 0x2bbaee9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162944 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc183000/0x0/0x4ffc00000, data 0xdc68e5/0xea9000, compress 0x0/0x0/0x0, omap 0x15117, meta 0x2bbaee9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc183000/0x0/0x4ffc00000, data 0xdc68e5/0xea9000, compress 0x0/0x0/0x0, omap 0x15117, meta 0x2bbaee9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc183000/0x0/0x4ffc00000, data 0xdc68e5/0xea9000, compress 0x0/0x0/0x0, omap 0x15117, meta 0x2bbaee9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166438 data_alloc: 218103808 data_used: 12791
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 148 ms_handle_reset con 0x555b29002800 session 0x555b2a429a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 148 ms_handle_reset con 0x555b29002400 session 0x555b29b31a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdc8364/0xeac000, compress 0x0/0x0/0x0, omap 0x152ed, meta 0x2bbad13), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 14606336 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.717338562s of 10.783451080s, submitted: 52
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b273b48c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 14737408 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b2a428c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901d000 session 0x555b29bc9500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29bfd000 session 0x555b28e35a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29bf4400 session 0x555b29b55340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 14712832 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b281cd340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 14696448 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9f10/0xeb0000, compress 0x0/0x0/0x0, omap 0x166a6, meta 0x2bb995a), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b29773a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b27d19340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b2a29ca80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901d000 session 0x555b2966d880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172522 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x169ef, meta 0x2bb9611), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172522 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 14671872 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.375376701s of 10.438738823s, submitted: 37
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b281b08c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 14655488 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b2948a000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 14761984 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b28e45a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b281cd6c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b29d99dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 14721024 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182957 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc177000/0x0/0x4ffc00000, data 0xdc9fe4/0xeb3000, compress 0x0/0x0/0x0, omap 0x16571, meta 0x2bb9a8f), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b2a437a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 14712832 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b27d14540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b29890700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901d000 session 0x555b298c1a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b27d14fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b26746540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16807, meta 0x2bb97f9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b29891dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b2a429340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29bf4400 session 0x555b2966c8c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177389 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b298c0540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 14753792 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 14753792 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 14753792 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.344865799s of 10.541636467s, submitted: 57
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b28ea08c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 14753792 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b2a436000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b293bae00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16937, meta 0x2bb96c9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177389 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 14745600 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16937, meta 0x2bb96c9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16937, meta 0x2bb96c9), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b27daa000 session 0x555b2757ec40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b298c01c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 14721024 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184133 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 14721024 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b2a428540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b296c0fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 14688256 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b26b4ce00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29d29000 session 0x555b2a29c380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 14688256 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 14688256 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.579617500s of 10.639943123s, submitted: 35
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29002400 session 0x555b28de3340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 14688256 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9f62/0xeb0000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182997 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 14688256 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29003c00 session 0x555b28e35500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b29008000 session 0x555b297728c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182256 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f00/0xeaf000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182256 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.846218109s of 11.860869408s, submitted: 10
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 ms_handle_reset con 0x555b2901cc00 session 0x555b29772540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9f62/0xeb0000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9f62/0xeb0000, compress 0x0/0x0/0x0, omap 0x16bcd, meta 0x2bb9433), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183962 data_alloc: 218103808 data_used: 13376
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89071616 unmapped: 14614528 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 150 ms_handle_reset con 0x555b29bfd000 session 0x555b27d14380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 150 ms_handle_reset con 0x555b2901d000 session 0x555b296c0380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 14630912 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 152 ms_handle_reset con 0x555b29002400 session 0x555b26b4d880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc16b000/0x0/0x4ffc00000, data 0xdcf318/0xebb000, compress 0x0/0x0/0x0, omap 0x16e56, meta 0x2bb91aa), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 14622720 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 ms_handle_reset con 0x555b29003c00 session 0x555b26fd3500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 ms_handle_reset con 0x555b29bf4400 session 0x555b2a41ae00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 14622720 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 ms_handle_reset con 0x555b29008000 session 0x555b2a4361c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200955 data_alloc: 218103808 data_used: 13509
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 ms_handle_reset con 0x555b29002400 session 0x555b2a4376c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 10084352 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fc16b000/0x0/0x4ffc00000, data 0xdd13c7/0xebf000, compress 0x0/0x0/0x0, omap 0x16f33, meta 0x2bb90cd), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.899493217s of 10.953438759s, submitted: 32
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 10084352 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 154 ms_handle_reset con 0x555b29003c00 session 0x555b2a428700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 155 ms_handle_reset con 0x555b29008000 session 0x555b2966c1c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 94642176 unmapped: 12722176 heap: 107364352 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fc16a000/0x0/0x4ffc00000, data 0xdd2f7f/0xec2000, compress 0x0/0x0/0x0, omap 0x17122, meta 0x2bb8ede), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 155 ms_handle_reset con 0x555b2901cc00 session 0x555b2a41b880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 12533760 heap: 107364352 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fba3a000/0x0/0x4ffc00000, data 0x14ffb8b/0x15f0000, compress 0x0/0x0/0x0, omap 0x171c8, meta 0x2bb8e38), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 156 ms_handle_reset con 0x555b29bf4400 session 0x555b29d996c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 12509184 heap: 107364352 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 156 ms_handle_reset con 0x555b29003c00 session 0x555b28e34380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 ms_handle_reset con 0x555b29002400 session 0x555b2757ee00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 ms_handle_reset con 0x555b29008000 session 0x555b28ea0000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311897 data_alloc: 218103808 data_used: 4669006
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 ms_handle_reset con 0x555b2901d000 session 0x555b28116e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 ms_handle_reset con 0x555b2901cc00 session 0x555b29bc9c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 13615104 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0x1b2c25c/0x1c20000, compress 0x0/0x0/0x0, omap 0x17444, meta 0x3d58bbc), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 ms_handle_reset con 0x555b29003c00 session 0x555b2a429500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 13615104 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 158 ms_handle_reset con 0x555b29002400 session 0x555b2966c540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 158 ms_handle_reset con 0x555b29008000 session 0x555b29d98fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 158 ms_handle_reset con 0x555b2901d000 session 0x555b28116700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 158 ms_handle_reset con 0x555b29002c00 session 0x555b26fd2700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 13557760 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 159 ms_handle_reset con 0x555b29002400 session 0x555b29d99c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 159 ms_handle_reset con 0x555b29002c00 session 0x555b281cce00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 13557760 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97550336 unmapped: 13492224 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 ms_handle_reset con 0x555b29008000 session 0x555b29dec540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 ms_handle_reset con 0x555b2901d000 session 0x555b296996c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fa264000/0x0/0x4ffc00000, data 0x1b2f5a9/0x1c26000, compress 0x0/0x0/0x0, omap 0x178d8, meta 0x3d58728), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319611 data_alloc: 218103808 data_used: 4667653
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 13484032 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.317519188s of 10.056072235s, submitted: 201
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 ms_handle_reset con 0x555b29002800 session 0x555b2a41bdc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 97402880 unmapped: 13639680 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104898560 unmapped: 6144000 heap: 111042560 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 ms_handle_reset con 0x555b29002c00 session 0x555b26b4c8c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 ms_handle_reset con 0x555b2a9bc000 session 0x555b281cd6c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 112041984 unmapped: 49152 heap: 112091136 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 161 ms_handle_reset con 0x555b2a9bc800 session 0x555b281b1500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 974848 heap: 113139712 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b29008000 session 0x555b296988c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b2a9bc400 session 0x555b281b1180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b2a9bc400 session 0x555b28116a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b2901d000 session 0x555b2a436700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343973 data_alloc: 234881024 data_used: 11969497
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b29002c00 session 0x555b273b4540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 heartbeat osd_stat(store_statfs(0x4fa25b000/0x0/0x4ffc00000, data 0x1b32e95/0x1c2f000, compress 0x0/0x0/0x0, omap 0x17a24, meta 0x3d585dc), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 108634112 unmapped: 4505600 heap: 113139712 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b29008000 session 0x555b29bc9a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 ms_handle_reset con 0x555b2a9bc000 session 0x555b267468c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 4497408 heap: 113139712 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 ms_handle_reset con 0x555b29002c00 session 0x555b29ded180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 ms_handle_reset con 0x555b29008000 session 0x555b281b0380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 ms_handle_reset con 0x555b29002400 session 0x555b28de3dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 ms_handle_reset con 0x555b29003c00 session 0x555b28e34e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0x150d593/0x1608000, compress 0x0/0x0/0x0, omap 0x173a8, meta 0x3d58c58), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 5488640 heap: 114188288 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 ms_handle_reset con 0x555b2901d000 session 0x555b296c16c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 12640256 heap: 114188288 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 heartbeat osd_stat(store_statfs(0x4fafae000/0x0/0x4ffc00000, data 0xde2583/0xedc000, compress 0x0/0x0/0x0, omap 0x16d2c, meta 0x3d592d4), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 164 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d18a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 164 ms_handle_reset con 0x555b29002400 session 0x555b28117880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 12640256 heap: 114188288 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258543 data_alloc: 218103808 data_used: 4672276
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 12640256 heap: 114188288 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 164 heartbeat osd_stat(store_statfs(0x4fafaa000/0x0/0x4ffc00000, data 0xde413d/0xede000, compress 0x0/0x0/0x0, omap 0x16d3b, meta 0x3d592c5), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2901d000 session 0x555b28117180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 12632064 heap: 114188288 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2a9bc800 session 0x555b27d14700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2a9bcc00 session 0x555b29890a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b29002400 session 0x555b2a428380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2901d000 session 0x555b28de21c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.931444168s of 11.206422806s, submitted: 162
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 19111936 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d0e8c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2a9bc800 session 0x555b2a29c000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 ms_handle_reset con 0x555b2a9bd000 session 0x555b2966d500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 19193856 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 heartbeat osd_stat(store_statfs(0x4fab2f000/0x0/0x4ffc00000, data 0x1261bf4/0x135d000, compress 0x0/0x0/0x0, omap 0x16de1, meta 0x3d5921f), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 19193856 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b29002400 session 0x555b2a445180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302102 data_alloc: 218103808 data_used: 4672548
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 19193856 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2901d000 session 0x555b293bac40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc400 session 0x555b29b31dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc800 session 0x555b296c1a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bd000 session 0x555b27d14a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 19308544 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b29002400 session 0x555b28e34fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 19308544 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2901d000 session 0x555b29b54a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc400 session 0x555b28e35dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc800 session 0x555b26747880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bd400 session 0x555b29b308c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101310464 unmapped: 19177472 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc400 session 0x555b281cdc00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bc800 session 0x555b273b56c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 ms_handle_reset con 0x555b2a9bdc00 session 0x555b298c16c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 102260736 unmapped: 18227200 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2901d000 session 0x555b27d141c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b29002400 session 0x555b28ea1dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fab28000/0x0/0x4ffc00000, data 0x1263734/0x1364000, compress 0x0/0x0/0x0, omap 0x15f8e, meta 0x3d5a072), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2901d000 session 0x555b28de2540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282542 data_alloc: 218103808 data_used: 4672820
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 19570688 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc400 session 0x555b29bc8000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc800 session 0x555b296c0000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 19562496 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bdc00 session 0x555b28e45880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b27298400 session 0x555b28de2000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b26999400 session 0x555b27d15a40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc400 session 0x555b28ea0fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2901d000 session 0x555b2a429180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.718855858s of 10.019176483s, submitted: 164
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc800 session 0x555b29d99880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 19259392 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 19259392 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bdc00 session 0x555b2a437dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fafa4000/0x0/0x4ffc00000, data 0xde911e/0xee8000, compress 0x0/0x0/0x0, omap 0x15360, meta 0x3d5aca0), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b26999400 session 0x555b29698c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2901d000 session 0x555b2a437180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 19316736 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fafa4000/0x0/0x4ffc00000, data 0xde9170/0xee8000, compress 0x0/0x0/0x0, omap 0x16620, meta 0x3d599e0), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc800 session 0x555b28ea0000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc400 session 0x555b26b4cc40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bdc00 session 0x555b27d15880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284572 data_alloc: 218103808 data_used: 4676897
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 19316736 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b26999400 session 0x555b28e34c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2901d000 session 0x555b29ded6c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 19267584 heap: 120487936 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 36356096 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 27803648 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7fa4000/0x0/0x4ffc00000, data 0x3de9138/0x3ee8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [1,0,1])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f67a4000/0x0/0x4ffc00000, data 0x55e9138/0x56e8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 109551616 unmapped: 27721728 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1827124 data_alloc: 218103808 data_used: 4676916
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 35930112 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 35782656 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4effa4000/0x0/0x4ffc00000, data 0xbde9140/0xbee8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.094546318s of 10.088048935s, submitted: 106
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 35389440 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 35274752 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 35110912 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2575500 data_alloc: 218103808 data_used: 4676916
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 33882112 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x105e9150/0x106e8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 33628160 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 33251328 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread fragmentation_score=0.000228 took=0.000036s
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104136704 unmapped: 33136640 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 32931840 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d15dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282636 data_alloc: 218103808 data_used: 4676916
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 ms_handle_reset con 0x555b2a9bc800 session 0x555b281cd500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 32776192 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 32604160 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 heartbeat osd_stat(store_statfs(0x4e17a4000/0x0/0x4ffc00000, data 0x1a5e9170/0x1a6e8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 ms_handle_reset con 0x555b2a9bdc00 session 0x555b28de3180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 heartbeat osd_stat(store_statfs(0x4e17a4000/0x0/0x4ffc00000, data 0x1a5e9170/0x1a6e8000, compress 0x0/0x0/0x0, omap 0x16535, meta 0x3d59acb), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 32587776 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.690764427s of 10.115633011s, submitted: 111
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 ms_handle_reset con 0x555b26999400 session 0x555b273b4000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 ms_handle_reset con 0x555b2901d000 session 0x555b28116700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 ms_handle_reset con 0x555b2a9bc400 session 0x555b296c1c00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 32587776 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 32579584 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2a9bc800 session 0x555b2a444380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b28190c00 session 0x555b26fd21c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b29002c00 session 0x555b29772c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 heartbeat osd_stat(store_statfs(0x4e17a0000/0x0/0x4ffc00000, data 0x1a5ead4e/0x1a6ec000, compress 0x0/0x0/0x0, omap 0x17430, meta 0x3d58bd0), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2901d000 session 0x555b2590dc00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364377 data_alloc: 218103808 data_used: 4676916
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b26999400 session 0x555b28e1c700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2a9bc400 session 0x555b296c0a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2a9bc800 session 0x555b29b55180
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104259584 unmapped: 33013760 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b26999400 session 0x555b281b0e00
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 33005568 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b29002c00 session 0x555b27d18700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2901d000 session 0x555b29772a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25403392 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2a9bc800 session 0x555b2757f500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b2a9bc400 session 0x555b28e44c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b26999400 session 0x555b29d99500
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 ms_handle_reset con 0x555b29002800 session 0x555b26746380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 30408704 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106938368 unmapped: 30334976 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b29002c00 session 0x555b28e1c1c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 heartbeat osd_stat(store_statfs(0x4fa46e000/0x0/0x4ffc00000, data 0x191e838/0x1a1e000, compress 0x0/0x0/0x0, omap 0x168ef, meta 0x3d59711), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448584 data_alloc: 218103808 data_used: 4680946
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b2901d000 session 0x555b28ea0540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b2a9bc800 session 0x555b298c1340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b26999400 session 0x555b27d188c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b29002800 session 0x555b2a29d6c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b29002c00 session 0x555b29773dc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.845857620s of 11.332592964s, submitted: 226
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 ms_handle_reset con 0x555b2a9bc800 session 0x555b2a29c540
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 171 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0x19203f0/0x1a21000, compress 0x0/0x0/0x0, omap 0x16ee2, meta 0x3d5911e), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 171 ms_handle_reset con 0x555b2901d000 session 0x555b2966d6c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452362 data_alloc: 218103808 data_used: 4680930
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 30326784 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 172 ms_handle_reset con 0x555b26999400 session 0x555b27d0e700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 172 heartbeat osd_stat(store_statfs(0x4fa466000/0x0/0x4ffc00000, data 0x1921fc4/0x1a24000, compress 0x0/0x0/0x0, omap 0x16f88, meta 0x3d59078), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 30302208 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 ms_handle_reset con 0x555b29002c00 session 0x555b27d15340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 ms_handle_reset con 0x555b2a9bc800 session 0x555b28de2fc0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 ms_handle_reset con 0x555b29002800 session 0x555b2757e1c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 30302208 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 ms_handle_reset con 0x555b2b3ac400 session 0x555b29d981c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 ms_handle_reset con 0x555b2b3ac000 session 0x555b29d99880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 174 ms_handle_reset con 0x555b29002800 session 0x555b29d98c40
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 174 ms_handle_reset con 0x555b26999400 session 0x555b281cd340
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 30253056 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b29002c00 session 0x555b296c0000
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b2a9bc800 session 0x555b273b48c0
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b29002800 session 0x555b27d0f880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b26999400 session 0x555b273b4a80
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 30253056 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1928ecc/0x1a2f000, compress 0x0/0x0/0x0, omap 0x1717a, meta 0x3d58e86), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b29002c00 session 0x555b293ba700
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465384 data_alloc: 218103808 data_used: 4682100
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 30253056 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 ms_handle_reset con 0x555b2b3ac000 session 0x555b29d98380
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1928ecc/0x1a2f000, compress 0x0/0x0/0x0, omap 0x1717a, meta 0x3d58e86), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 30498816 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fa45d000/0x0/0x4ffc00000, data 0x1928ecc/0x1a2f000, compress 0x0/0x0/0x0, omap 0x16c4a, meta 0x3d593b6), peers [0,2] op hist [])
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 30498816 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 175 handle_osd_map epochs [175,176], i have 176, src has [1,176]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 176 ms_handle_reset con 0x555b2b3ac800 session 0x555b281cd880
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 30490624 heap: 137273344 old mem: 2845415832 new mem: 2845415832
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Feb  2 13:03:21 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.072850227s of 10.656076431s, submitted: 63
Feb  2 13:07:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:12 np0005605476 rsyslogd[1006]: imjournal: 20473 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb  2 13:07:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:15 np0005605476 nova_compute[239846]: 2026-02-02 18:07:15.848 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:20 np0005605476 nova_compute[239846]: 2026-02-02 18:07:20.850 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:25 np0005605476 nova_compute[239846]: 2026-02-02 18:07:25.851 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 13:07:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.853 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.854 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.854 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.854 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.855 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:07:30 np0005605476 nova_compute[239846]: 2026-02-02 18:07:30.856 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:35 np0005605476 nova_compute[239846]: 2026-02-02 18:07:35.856 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:07:36
Feb  2 13:07:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:07:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:07:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Feb  2 13:07:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:07:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:39 np0005605476 podman[284054]: 2026-02-02 18:07:39.612291406 +0000 UTC m=+0.057644736 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 13:07:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:40 np0005605476 nova_compute[239846]: 2026-02-02 18:07:40.509 239853 DEBUG oslo_concurrency.processutils [None req-6f30ed06-bfe4-4711-ac03-56d24600d5f9 c12d6d0fca2548e7a5504cbd580cc611 628bef10fb3a45d18abe453a0d66d537 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:07:40 np0005605476 nova_compute[239846]: 2026-02-02 18:07:40.533 239853 DEBUG oslo_concurrency.processutils [None req-6f30ed06-bfe4-4711-ac03-56d24600d5f9 c12d6d0fca2548e7a5504cbd580cc611 628bef10fb3a45d18abe453a0d66d537 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:07:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:40 np0005605476 nova_compute[239846]: 2026-02-02 18:07:40.857 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:42 np0005605476 nova_compute[239846]: 2026-02-02 18:07:42.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:42 np0005605476 podman[284075]: 2026-02-02 18:07:42.630037346 +0000 UTC m=+0.073141831 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 13:07:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:07:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:07:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:07:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:45 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.359042116 +0000 UTC m=+0.054211787 container create 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 13:07:45 np0005605476 systemd[1]: Started libpod-conmon-74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c.scope.
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.326807631 +0000 UTC m=+0.021977322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.443588013 +0000 UTC m=+0.138757704 container init 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.450938184 +0000 UTC m=+0.146107845 container start 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.454663541 +0000 UTC m=+0.149833302 container attach 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 13:07:45 np0005605476 jolly_ride[284261]: 167 167
Feb  2 13:07:45 np0005605476 systemd[1]: libpod-74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c.scope: Deactivated successfully.
Feb  2 13:07:45 np0005605476 conmon[284261]: conmon 74e0329f3558fda7a79e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c.scope/container/memory.events
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.459036947 +0000 UTC m=+0.154206618 container died 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 13:07:45 np0005605476 systemd[1]: var-lib-containers-storage-overlay-058f535c2265269019436b9e7c41b575ecff78f5e3659ae77b24ba8f4c40d664-merged.mount: Deactivated successfully.
Feb  2 13:07:45 np0005605476 podman[284244]: 2026-02-02 18:07:45.544669905 +0000 UTC m=+0.239839586 container remove 74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:07:45 np0005605476 systemd[1]: libpod-conmon-74e0329f3558fda7a79e41b67a1f5c0801e8ae27746d204c347aab54c48af48c.scope: Deactivated successfully.
Feb  2 13:07:45 np0005605476 podman[284282]: 2026-02-02 18:07:45.673698098 +0000 UTC m=+0.037237360 container create 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 13:07:45 np0005605476 systemd[1]: Started libpod-conmon-64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473.scope.
Feb  2 13:07:45 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:45 np0005605476 podman[284282]: 2026-02-02 18:07:45.657428351 +0000 UTC m=+0.020967643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:45 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:45 np0005605476 podman[284282]: 2026-02-02 18:07:45.773453362 +0000 UTC m=+0.136992654 container init 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:07:45 np0005605476 podman[284282]: 2026-02-02 18:07:45.780082482 +0000 UTC m=+0.143621744 container start 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb  2 13:07:45 np0005605476 podman[284282]: 2026-02-02 18:07:45.786544017 +0000 UTC m=+0.150083299 container attach 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 13:07:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:45 np0005605476 nova_compute[239846]: 2026-02-02 18:07:45.859 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:46 np0005605476 boring_hofstadter[284299]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:07:46 np0005605476 boring_hofstadter[284299]: --> All data devices are unavailable
Feb  2 13:07:46 np0005605476 systemd[1]: libpod-64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473.scope: Deactivated successfully.
Feb  2 13:07:46 np0005605476 conmon[284299]: conmon 64897309dc85d3bbf362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473.scope/container/memory.events
Feb  2 13:07:46 np0005605476 podman[284282]: 2026-02-02 18:07:46.257202068 +0000 UTC m=+0.620741330 container died 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 13:07:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-02e8c173396e0eb14ef29181fd222d53e986d111fff4df2256d539267d692d4d-merged.mount: Deactivated successfully.
Feb  2 13:07:46 np0005605476 podman[284282]: 2026-02-02 18:07:46.401569753 +0000 UTC m=+0.765109015 container remove 64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 13:07:46 np0005605476 systemd[1]: libpod-conmon-64897309dc85d3bbf36246f150915e12d48ec303a963a8cec2e8903aa3d5b473.scope: Deactivated successfully.
Feb  2 13:07:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:46.659 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:07:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:46.659 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:07:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:46.659 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.842540521 +0000 UTC m=+0.039753142 container create 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:07:46 np0005605476 systemd[1]: Started libpod-conmon-74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f.scope.
Feb  2 13:07:46 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.915746023 +0000 UTC m=+0.112958694 container init 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.825248505 +0000 UTC m=+0.022461156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.920239952 +0000 UTC m=+0.117452573 container start 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.923574487 +0000 UTC m=+0.120787198 container attach 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 13:07:46 np0005605476 laughing_bell[284409]: 167 167
Feb  2 13:07:46 np0005605476 systemd[1]: libpod-74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f.scope: Deactivated successfully.
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.927662835 +0000 UTC m=+0.124875496 container died 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 13:07:46 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f44f4ff8eb17264001b36abcb0b57a4d579a701e8ff7c03c63c800b412a011b0-merged.mount: Deactivated successfully.
Feb  2 13:07:46 np0005605476 podman[284392]: 2026-02-02 18:07:46.972995166 +0000 UTC m=+0.170207807 container remove 74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 13:07:46 np0005605476 systemd[1]: libpod-conmon-74c26a42a9d0db0186967cb7e07ca6967ca8e46ea4359c1cc5fdaf82b05a4b1f.scope: Deactivated successfully.
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.123425084 +0000 UTC m=+0.052822447 container create 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:07:47 np0005605476 systemd[1]: Started libpod-conmon-2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7.scope.
Feb  2 13:07:47 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7faf77f9aac7cf9dd5e2ed379a05e609c539b0f105d65fc931e7c1f65ef2467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7faf77f9aac7cf9dd5e2ed379a05e609c539b0f105d65fc931e7c1f65ef2467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7faf77f9aac7cf9dd5e2ed379a05e609c539b0f105d65fc931e7c1f65ef2467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:47 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7faf77f9aac7cf9dd5e2ed379a05e609c539b0f105d65fc931e7c1f65ef2467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.102271247 +0000 UTC m=+0.031668640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.201751423 +0000 UTC m=+0.131148806 container init 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.210277828 +0000 UTC m=+0.139675191 container start 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.213264313 +0000 UTC m=+0.142661696 container attach 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]: {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    "0": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "devices": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "/dev/loop3"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            ],
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_name": "ceph_lv0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_size": "21470642176",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "name": "ceph_lv0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "tags": {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_name": "ceph",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.crush_device_class": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.encrypted": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.objectstore": "bluestore",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_id": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.vdo": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.with_tpm": "0"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            },
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "vg_name": "ceph_vg0"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        }
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    ],
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    "1": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "devices": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "/dev/loop4"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            ],
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_name": "ceph_lv1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_size": "21470642176",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "name": "ceph_lv1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "tags": {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_name": "ceph",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.crush_device_class": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.encrypted": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.objectstore": "bluestore",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_id": "1",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.vdo": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.with_tpm": "0"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            },
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "vg_name": "ceph_vg1"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        }
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    ],
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    "2": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "devices": [
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "/dev/loop5"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            ],
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_name": "ceph_lv2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_size": "21470642176",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "name": "ceph_lv2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "tags": {
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.cluster_name": "ceph",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.crush_device_class": "",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.encrypted": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.objectstore": "bluestore",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osd_id": "2",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.vdo": "0",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:                "ceph.with_tpm": "0"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            },
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "type": "block",
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:            "vg_name": "ceph_vg2"
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:        }
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]:    ]
Feb  2 13:07:47 np0005605476 musing_ardinghelli[284449]: }
Feb  2 13:07:47 np0005605476 systemd[1]: libpod-2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7.scope: Deactivated successfully.
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.504256437 +0000 UTC m=+0.433653890 container died 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:07:47 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c7faf77f9aac7cf9dd5e2ed379a05e609c539b0f105d65fc931e7c1f65ef2467-merged.mount: Deactivated successfully.
Feb  2 13:07:47 np0005605476 podman[284432]: 2026-02-02 18:07:47.543101422 +0000 UTC m=+0.472498785 container remove 2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 13:07:47 np0005605476 systemd[1]: libpod-conmon-2f8205b4ce0b419e7f2d9efbf7dcb0fb58097fdcaede7b83322b76318755dcd7.scope: Deactivated successfully.
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:07:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:47.758 155391 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:7f:4f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:00:77:69:d3:d2'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 13:07:47 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:47.760 155391 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 13:07:47 np0005605476 nova_compute[239846]: 2026-02-02 18:07:47.764 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:47 np0005605476 podman[284530]: 2026-02-02 18:07:47.971370636 +0000 UTC m=+0.039566737 container create 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:07:48 np0005605476 systemd[1]: Started libpod-conmon-327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a.scope.
Feb  2 13:07:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:48.042616381 +0000 UTC m=+0.110812462 container init 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:48.04885243 +0000 UTC m=+0.117048491 container start 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:47.955006646 +0000 UTC m=+0.023202737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:48.052280889 +0000 UTC m=+0.120476980 container attach 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 13:07:48 np0005605476 systemd[1]: libpod-327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a.scope: Deactivated successfully.
Feb  2 13:07:48 np0005605476 wonderful_rhodes[284547]: 167 167
Feb  2 13:07:48 np0005605476 conmon[284547]: conmon 327b4ef6ae09876385b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a.scope/container/memory.events
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:48.055697467 +0000 UTC m=+0.123893528 container died 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:07:48 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f4f19608f1f02a689e1f9e05f5dfa23b2a4897e9ddc806b8b54a4cfff0338d8e-merged.mount: Deactivated successfully.
Feb  2 13:07:48 np0005605476 podman[284530]: 2026-02-02 18:07:48.090897817 +0000 UTC m=+0.159093878 container remove 327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_rhodes, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:07:48 np0005605476 systemd[1]: libpod-conmon-327b4ef6ae09876385b89b2f6762b95398001d776227ca9a20833992ef761a3a.scope: Deactivated successfully.
Feb  2 13:07:48 np0005605476 podman[284571]: 2026-02-02 18:07:48.236764585 +0000 UTC m=+0.048730180 container create 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 13:07:48 np0005605476 systemd[1]: Started libpod-conmon-2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4.scope.
Feb  2 13:07:48 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:07:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95b8500b81f6a1f1d34216907b3d9deaa09ad903bec4ef16cce887b24970b46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95b8500b81f6a1f1d34216907b3d9deaa09ad903bec4ef16cce887b24970b46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95b8500b81f6a1f1d34216907b3d9deaa09ad903bec4ef16cce887b24970b46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:48 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95b8500b81f6a1f1d34216907b3d9deaa09ad903bec4ef16cce887b24970b46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:07:48 np0005605476 podman[284571]: 2026-02-02 18:07:48.213223509 +0000 UTC m=+0.025189194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:07:48 np0005605476 podman[284571]: 2026-02-02 18:07:48.314488876 +0000 UTC m=+0.126454521 container init 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:07:48 np0005605476 podman[284571]: 2026-02-02 18:07:48.320817138 +0000 UTC m=+0.132782723 container start 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb  2 13:07:48 np0005605476 podman[284571]: 2026-02-02 18:07:48.324936826 +0000 UTC m=+0.136902511 container attach 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:07:48 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:07:48.761 155391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=13051b64-c07e-4136-ad5c-993d3a84d93c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 13:07:48 np0005605476 lvm[284668]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:07:48 np0005605476 lvm[284668]: VG ceph_vg1 finished
Feb  2 13:07:48 np0005605476 lvm[284666]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:07:48 np0005605476 lvm[284666]: VG ceph_vg0 finished
Feb  2 13:07:48 np0005605476 lvm[284670]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:07:48 np0005605476 lvm[284670]: VG ceph_vg2 finished
Feb  2 13:07:49 np0005605476 frosty_darwin[284589]: {}
Feb  2 13:07:49 np0005605476 systemd[1]: libpod-2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4.scope: Deactivated successfully.
Feb  2 13:07:49 np0005605476 podman[284571]: 2026-02-02 18:07:49.156048154 +0000 UTC m=+0.968013749 container died 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 13:07:49 np0005605476 systemd[1]: libpod-2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4.scope: Consumed 1.186s CPU time.
Feb  2 13:07:49 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f95b8500b81f6a1f1d34216907b3d9deaa09ad903bec4ef16cce887b24970b46-merged.mount: Deactivated successfully.
Feb  2 13:07:49 np0005605476 podman[284571]: 2026-02-02 18:07:49.207414258 +0000 UTC m=+1.019379903 container remove 2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 13:07:49 np0005605476 systemd[1]: libpod-conmon-2c76e35df4475712beafdba05bd56310e0b9640d1a2de1245cb1fd89488c10f4.scope: Deactivated successfully.
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.268 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.269 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.269 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.270 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.270 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:07:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142274355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.797 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:07:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.932 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.933 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.934 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.934 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.995 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:07:49 np0005605476 nova_compute[239846]: 2026-02-02 18:07:49.995 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.009 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:07:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:07:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:07:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532222425' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.559 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.565 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.583 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.584 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.585 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:07:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:50 np0005605476 nova_compute[239846]: 2026-02-02 18:07:50.860 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:51 np0005605476 nova_compute[239846]: 2026-02-02 18:07:51.585 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:51 np0005605476 nova_compute[239846]: 2026-02-02 18:07:51.585 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:54 np0005605476 nova_compute[239846]: 2026-02-02 18:07:54.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.257 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.257 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.257 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.269 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:07:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:07:55 np0005605476 nova_compute[239846]: 2026-02-02 18:07:55.862 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:07:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:57 np0005605476 nova_compute[239846]: 2026-02-02 18:07:57.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:07:58 np0005605476 nova_compute[239846]: 2026-02-02 18:07:58.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:07:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:00 np0005605476 nova_compute[239846]: 2026-02-02 18:08:00.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:00 np0005605476 nova_compute[239846]: 2026-02-02 18:08:00.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:08:00 np0005605476 nova_compute[239846]: 2026-02-02 18:08:00.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:00 np0005605476 nova_compute[239846]: 2026-02-02 18:08:00.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 13:08:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:00 np0005605476 nova_compute[239846]: 2026-02-02 18:08:00.864 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:08:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:08:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414307552' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:08:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:08:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414307552' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:08:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.865 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.866 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.866 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.867 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.867 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:08:05 np0005605476 nova_compute[239846]: 2026-02-02 18:08:05.868 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:10 np0005605476 podman[284753]: 2026-02-02 18:08:10.608136336 +0000 UTC m=+0.055721291 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb  2 13:08:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:10 np0005605476 nova_compute[239846]: 2026-02-02 18:08:10.868 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:12 np0005605476 nova_compute[239846]: 2026-02-02 18:08:12.492 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:13 np0005605476 nova_compute[239846]: 2026-02-02 18:08:13.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:13 np0005605476 nova_compute[239846]: 2026-02-02 18:08:13.243 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 13:08:13 np0005605476 nova_compute[239846]: 2026-02-02 18:08:13.262 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 13:08:13 np0005605476 podman[284773]: 2026-02-02 18:08:13.626814391 +0000 UTC m=+0.075595561 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Feb  2 13:08:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:15 np0005605476 nova_compute[239846]: 2026-02-02 18:08:15.870 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:20 np0005605476 nova_compute[239846]: 2026-02-02 18:08:20.871 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:21 np0005605476 nova_compute[239846]: 2026-02-02 18:08:21.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:25 np0005605476 nova_compute[239846]: 2026-02-02 18:08:25.872 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.195533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709195616, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3525461, "memory_usage": 3579232, "flush_reason": "Manual Compaction"}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709216353, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3458262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36851, "largest_seqno": 38895, "table_properties": {"data_size": 3448831, "index_size": 5989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18534, "raw_average_key_size": 20, "raw_value_size": 3430275, "raw_average_value_size": 3712, "num_data_blocks": 266, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055477, "oldest_key_time": 1770055477, "file_creation_time": 1770055709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 20875 microseconds, and 8875 cpu microseconds.
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.216420) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3458262 bytes OK
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.216448) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.217952) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.217982) EVENT_LOG_v1 {"time_micros": 1770055709217972, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.218016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3516919, prev total WAL file size 3516919, number of live WAL files 2.
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.219288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3377KB)], [77(9922KB)]
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709219355, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13619078, "oldest_snapshot_seqno": -1}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7210 keys, 11869103 bytes, temperature: kUnknown
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709280256, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11869103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11815034, "index_size": 34949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 181741, "raw_average_key_size": 25, "raw_value_size": 11679901, "raw_average_value_size": 1619, "num_data_blocks": 1391, "num_entries": 7210, "num_filter_entries": 7210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.280517) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11869103 bytes
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.282229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.3 rd, 194.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.7 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 7724, records dropped: 514 output_compression: NoCompression
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.282248) EVENT_LOG_v1 {"time_micros": 1770055709282239, "job": 44, "event": "compaction_finished", "compaction_time_micros": 60980, "compaction_time_cpu_micros": 24131, "output_level": 6, "num_output_files": 1, "total_output_size": 11869103, "num_input_records": 7724, "num_output_records": 7210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709282935, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055709284410, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.219168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.284584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.284593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.284596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.284600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:08:29.284603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:08:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:30 np0005605476 nova_compute[239846]: 2026-02-02 18:08:30.873 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:30 np0005605476 nova_compute[239846]: 2026-02-02 18:08:30.875 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:35 np0005605476 nova_compute[239846]: 2026-02-02 18:08:35.875 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:08:36
Feb  2 13:08:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:08:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:08:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Feb  2 13:08:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:08:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:38 np0005605476 nova_compute[239846]: 2026-02-02 18:08:38.444 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:40 np0005605476 nova_compute[239846]: 2026-02-02 18:08:40.876 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:41 np0005605476 podman[284801]: 2026-02-02 18:08:41.598940718 +0000 UTC m=+0.046941488 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 13:08:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:42 np0005605476 nova_compute[239846]: 2026-02-02 18:08:42.258 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:44 np0005605476 podman[284820]: 2026-02-02 18:08:44.659756184 +0000 UTC m=+0.103229814 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 13:08:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:45 np0005605476 nova_compute[239846]: 2026-02-02 18:08:45.878 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:08:46.660 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:08:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:08:46.661 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:08:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:08:46.661 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:08:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:08:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:08:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:50 np0005605476 nova_compute[239846]: 2026-02-02 18:08:50.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.272475176 +0000 UTC m=+0.049784961 container create d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:08:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:08:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:50 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:08:50 np0005605476 systemd[1]: Started libpod-conmon-d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc.scope.
Feb  2 13:08:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.349729453 +0000 UTC m=+0.127039258 container init d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.255543119 +0000 UTC m=+0.032852914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.359133703 +0000 UTC m=+0.136443488 container start d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.362227652 +0000 UTC m=+0.139537447 container attach d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 13:08:50 np0005605476 pedantic_chatterjee[285005]: 167 167
Feb  2 13:08:50 np0005605476 systemd[1]: libpod-d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc.scope: Deactivated successfully.
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.366624808 +0000 UTC m=+0.143934593 container died d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 13:08:50 np0005605476 systemd[1]: var-lib-containers-storage-overlay-5d95be9d486ffc26e072e079809249505d6fc01a27fa4a6482eb98361c9fc039-merged.mount: Deactivated successfully.
Feb  2 13:08:50 np0005605476 podman[284989]: 2026-02-02 18:08:50.403530528 +0000 UTC m=+0.180840303 container remove d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 13:08:50 np0005605476 systemd[1]: libpod-conmon-d289a7f38eb75c3d407a34a00daf58e1a0d75d53a58babc89304bbf907fbcbfc.scope: Deactivated successfully.
Feb  2 13:08:50 np0005605476 podman[285029]: 2026-02-02 18:08:50.554998866 +0000 UTC m=+0.047885576 container create 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:08:50 np0005605476 systemd[1]: Started libpod-conmon-80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10.scope.
Feb  2 13:08:50 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:50 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:50 np0005605476 podman[285029]: 2026-02-02 18:08:50.536187886 +0000 UTC m=+0.029074636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:50 np0005605476 podman[285029]: 2026-02-02 18:08:50.65965395 +0000 UTC m=+0.152540660 container init 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 13:08:50 np0005605476 podman[285029]: 2026-02-02 18:08:50.667241738 +0000 UTC m=+0.160128458 container start 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:08:50 np0005605476 podman[285029]: 2026-02-02 18:08:50.671197502 +0000 UTC m=+0.164084222 container attach 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 13:08:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:50 np0005605476 nova_compute[239846]: 2026-02-02 18:08:50.879 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:51 np0005605476 trusting_borg[285046]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:08:51 np0005605476 trusting_borg[285046]: --> All data devices are unavailable
Feb  2 13:08:51 np0005605476 systemd[1]: libpod-80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10.scope: Deactivated successfully.
Feb  2 13:08:51 np0005605476 podman[285029]: 2026-02-02 18:08:51.095326727 +0000 UTC m=+0.588213427 container died 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 13:08:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-db3a5da6f1b992737e5c297885c22ccbd32dcb712fb43a1746e4abe2b66db2ee-merged.mount: Deactivated successfully.
Feb  2 13:08:51 np0005605476 podman[285029]: 2026-02-02 18:08:51.197656075 +0000 UTC m=+0.690542775 container remove 80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:08:51 np0005605476 systemd[1]: libpod-conmon-80e5229538d8f9f8bf1e44a86b21eba03306cf3d7f0c3dad89903b2b37f85b10.scope: Deactivated successfully.
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.277 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.277 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.278 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.278 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.279 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.657955058 +0000 UTC m=+0.052092206 container create 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 13:08:51 np0005605476 systemd[1]: Started libpod-conmon-2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1.scope.
Feb  2 13:08:51 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.732002424 +0000 UTC m=+0.126139612 container init 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.639960222 +0000 UTC m=+0.034097390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.73847295 +0000 UTC m=+0.132610098 container start 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.743624188 +0000 UTC m=+0.137761466 container attach 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 13:08:51 np0005605476 silly_bohr[285177]: 167 167
Feb  2 13:08:51 np0005605476 systemd[1]: libpod-2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1.scope: Deactivated successfully.
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.748620631 +0000 UTC m=+0.142757789 container died 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 13:08:51 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d17028d9fe3909f1e060645b3d08d56fc9a602560a979e804973900101170ca2-merged.mount: Deactivated successfully.
Feb  2 13:08:51 np0005605476 podman[285161]: 2026-02-02 18:08:51.780496856 +0000 UTC m=+0.174634004 container remove 2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bohr, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:08:51 np0005605476 systemd[1]: libpod-conmon-2ab9b6da5cd35272accd5e1093ca35d0948c3d45b44a931977289eaafe3d95e1.scope: Deactivated successfully.
Feb  2 13:08:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:08:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202639320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:08:51 np0005605476 nova_compute[239846]: 2026-02-02 18:08:51.858 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:08:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:51 np0005605476 podman[285202]: 2026-02-02 18:08:51.938158872 +0000 UTC m=+0.051162210 container create 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:08:51 np0005605476 systemd[1]: Started libpod-conmon-8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7.scope.
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:51.914154943 +0000 UTC m=+0.027158301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e2290a432799f4e9bcf6a5181e4d4cdddb351155b79c182bc3c8696f000d25a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e2290a432799f4e9bcf6a5181e4d4cdddb351155b79c182bc3c8696f000d25a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e2290a432799f4e9bcf6a5181e4d4cdddb351155b79c182bc3c8696f000d25a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:52 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e2290a432799f4e9bcf6a5181e4d4cdddb351155b79c182bc3c8696f000d25a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:52.029820673 +0000 UTC m=+0.142824031 container init 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:52.035929709 +0000 UTC m=+0.148933037 container start 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.037 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:52.03910929 +0000 UTC m=+0.152112628 container attach 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.039 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4184MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.039 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.040 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.140 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.141 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.216 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:08:52 np0005605476 crazy_carver[285218]: {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    "0": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "devices": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "/dev/loop3"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            ],
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_name": "ceph_lv0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_size": "21470642176",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "name": "ceph_lv0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "tags": {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_name": "ceph",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.crush_device_class": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.encrypted": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.objectstore": "bluestore",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_id": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.vdo": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.with_tpm": "0"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            },
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "vg_name": "ceph_vg0"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        }
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    ],
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    "1": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "devices": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "/dev/loop4"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            ],
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_name": "ceph_lv1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_size": "21470642176",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "name": "ceph_lv1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "tags": {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_name": "ceph",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.crush_device_class": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.encrypted": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.objectstore": "bluestore",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_id": "1",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.vdo": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.with_tpm": "0"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            },
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "vg_name": "ceph_vg1"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        }
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    ],
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    "2": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "devices": [
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "/dev/loop5"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            ],
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_name": "ceph_lv2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_size": "21470642176",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "name": "ceph_lv2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "tags": {
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.cluster_name": "ceph",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.crush_device_class": "",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.encrypted": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.objectstore": "bluestore",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osd_id": "2",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.vdo": "0",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:                "ceph.with_tpm": "0"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            },
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "type": "block",
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:            "vg_name": "ceph_vg2"
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:        }
Feb  2 13:08:52 np0005605476 crazy_carver[285218]:    ]
Feb  2 13:08:52 np0005605476 crazy_carver[285218]: }
Feb  2 13:08:52 np0005605476 systemd[1]: libpod-8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7.scope: Deactivated successfully.
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:52.375900158 +0000 UTC m=+0.488903496 container died 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:08:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-6e2290a432799f4e9bcf6a5181e4d4cdddb351155b79c182bc3c8696f000d25a-merged.mount: Deactivated successfully.
Feb  2 13:08:52 np0005605476 podman[285202]: 2026-02-02 18:08:52.419748817 +0000 UTC m=+0.532752155 container remove 8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_carver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:08:52 np0005605476 systemd[1]: libpod-conmon-8b6b1937959ab3485602978e21050c20fd20fe1b18375b2e6e16e1b2353707c7.scope: Deactivated successfully.
Feb  2 13:08:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:08:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959719061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.776 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.782 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.799 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.801 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:08:52 np0005605476 nova_compute[239846]: 2026-02-02 18:08:52.802 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.875816919 +0000 UTC m=+0.066256693 container create 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 13:08:52 np0005605476 systemd[1]: Started libpod-conmon-090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759.scope.
Feb  2 13:08:52 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.830810797 +0000 UTC m=+0.021250571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.932572109 +0000 UTC m=+0.123011903 container init 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.940315921 +0000 UTC m=+0.130755735 container start 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.944095239 +0000 UTC m=+0.134535033 container attach 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:08:52 np0005605476 amazing_yonath[285338]: 167 167
Feb  2 13:08:52 np0005605476 systemd[1]: libpod-090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759.scope: Deactivated successfully.
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.946842738 +0000 UTC m=+0.137282542 container died 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:08:52 np0005605476 systemd[1]: var-lib-containers-storage-overlay-fd81a9b55e6c88fec78ef56c78df1b55fdd966849ffb4e583a185a3df6c83af5-merged.mount: Deactivated successfully.
Feb  2 13:08:52 np0005605476 podman[285322]: 2026-02-02 18:08:52.985653542 +0000 UTC m=+0.176093336 container remove 090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_yonath, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:08:52 np0005605476 systemd[1]: libpod-conmon-090aa1b2d3a6e6d982901bb13713c6898bf141ef5119db633f1f67ed4c25d759.scope: Deactivated successfully.
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.115976804 +0000 UTC m=+0.040428862 container create 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:08:53 np0005605476 systemd[1]: Started libpod-conmon-43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5.scope.
Feb  2 13:08:53 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:08:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7c835e339048dd30fa6b985c040d681b91a5ff93c16ad8e409def49f92748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7c835e339048dd30fa6b985c040d681b91a5ff93c16ad8e409def49f92748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7c835e339048dd30fa6b985c040d681b91a5ff93c16ad8e409def49f92748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:53 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7c835e339048dd30fa6b985c040d681b91a5ff93c16ad8e409def49f92748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.09981757 +0000 UTC m=+0.024269628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.210094535 +0000 UTC m=+0.134546593 container init 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.216503779 +0000 UTC m=+0.140955807 container start 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.2196722 +0000 UTC m=+0.144124258 container attach 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 13:08:53 np0005605476 nova_compute[239846]: 2026-02-02 18:08:53.797 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:53 np0005605476 lvm[285460]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:08:53 np0005605476 lvm[285456]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:08:53 np0005605476 lvm[285456]: VG ceph_vg0 finished
Feb  2 13:08:53 np0005605476 lvm[285460]: VG ceph_vg2 finished
Feb  2 13:08:53 np0005605476 lvm[285459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:08:53 np0005605476 lvm[285459]: VG ceph_vg1 finished
Feb  2 13:08:53 np0005605476 peaceful_wright[285379]: {}
Feb  2 13:08:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:53 np0005605476 systemd[1]: libpod-43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5.scope: Deactivated successfully.
Feb  2 13:08:53 np0005605476 systemd[1]: libpod-43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5.scope: Consumed 1.052s CPU time.
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.938050761 +0000 UTC m=+0.862502799 container died 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:08:53 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b8a7c835e339048dd30fa6b985c040d681b91a5ff93c16ad8e409def49f92748-merged.mount: Deactivated successfully.
Feb  2 13:08:53 np0005605476 podman[285362]: 2026-02-02 18:08:53.982566359 +0000 UTC m=+0.907018387 container remove 43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 13:08:53 np0005605476 systemd[1]: libpod-conmon-43d4e245ac1683cedaaf1fbe92f02e2b2a52f0c41db8b6f66a3cac6248d8bea5.scope: Deactivated successfully.
Feb  2 13:08:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:08:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:08:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:08:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:08:55 np0005605476 nova_compute[239846]: 2026-02-02 18:08:55.881 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:08:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:56 np0005605476 nova_compute[239846]: 2026-02-02 18:08:56.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:56 np0005605476 nova_compute[239846]: 2026-02-02 18:08:56.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:08:56 np0005605476 nova_compute[239846]: 2026-02-02 18:08:56.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:08:56 np0005605476 nova_compute[239846]: 2026-02-02 18:08:56.278 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:08:56 np0005605476 nova_compute[239846]: 2026-02-02 18:08:56.279 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:08:58 np0005605476 nova_compute[239846]: 2026-02-02 18:08:58.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:08:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:00 np0005605476 nova_compute[239846]: 2026-02-02 18:09:00.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:00 np0005605476 nova_compute[239846]: 2026-02-02 18:09:00.882 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:00 np0005605476 nova_compute[239846]: 2026-02-02 18:09:00.887 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:02 np0005605476 nova_compute[239846]: 2026-02-02 18:09:02.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:02 np0005605476 nova_compute[239846]: 2026-02-02 18:09:02.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:09:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:09:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3289197127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:09:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:09:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3289197127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:09:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:05 np0005605476 nova_compute[239846]: 2026-02-02 18:09:05.883 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:05 np0005605476 nova_compute[239846]: 2026-02-02 18:09:05.888 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:10 np0005605476 nova_compute[239846]: 2026-02-02 18:09:10.884 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:10 np0005605476 nova_compute[239846]: 2026-02-02 18:09:10.888 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:12 np0005605476 podman[285500]: 2026-02-02 18:09:12.617247595 +0000 UTC m=+0.060219700 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 13:09:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:15 np0005605476 podman[285521]: 2026-02-02 18:09:15.674963551 +0000 UTC m=+0.126298037 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb  2 13:09:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:15 np0005605476 nova_compute[239846]: 2026-02-02 18:09:15.886 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:15 np0005605476 nova_compute[239846]: 2026-02-02 18:09:15.889 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:20 np0005605476 nova_compute[239846]: 2026-02-02 18:09:20.887 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:20 np0005605476 nova_compute[239846]: 2026-02-02 18:09:20.891 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:25 np0005605476 nova_compute[239846]: 2026-02-02 18:09:25.889 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:25 np0005605476 nova_compute[239846]: 2026-02-02 18:09:25.891 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:30 np0005605476 nova_compute[239846]: 2026-02-02 18:09:30.892 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:09:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.893 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.894 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.894 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.894 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.895 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:09:35 np0005605476 nova_compute[239846]: 2026-02-02 18:09:35.896 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:09:36
Feb  2 13:09:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:09:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:09:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'volumes', 'images', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Feb  2 13:09:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:09:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:40 np0005605476 nova_compute[239846]: 2026-02-02 18:09:40.896 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:43 np0005605476 podman[285547]: 2026-02-02 18:09:43.60175115 +0000 UTC m=+0.046392032 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 13:09:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:44 np0005605476 nova_compute[239846]: 2026-02-02 18:09:44.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.898 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.900 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.900 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.900 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.918 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:09:45 np0005605476 nova_compute[239846]: 2026-02-02 18:09:45.919 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:09:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:46 np0005605476 podman[285566]: 2026-02-02 18:09:46.657252511 +0000 UTC m=+0.105475049 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 13:09:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:09:46.661 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:09:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:09:46.661 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:09:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:09:46.661 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:09:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:50 np0005605476 nova_compute[239846]: 2026-02-02 18:09:50.920 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:09:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.353 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.354 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.354 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.354 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.354 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:09:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:09:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860632020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:09:52 np0005605476 nova_compute[239846]: 2026-02-02 18:09:52.906 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.041 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.043 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4241MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.043 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.044 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.107 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.107 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.126 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:09:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:09:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291336559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.663 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.668 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.683 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.685 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 13:09:53 np0005605476 nova_compute[239846]: 2026-02-02 18:09:53.685 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 13:09:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:09:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.053903807 +0000 UTC m=+0.044226967 container create c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:09:55 np0005605476 systemd[1]: Started libpod-conmon-c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21.scope.
Feb  2 13:09:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.121318976 +0000 UTC m=+0.111642156 container init c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.032624647 +0000 UTC m=+0.022947827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.129127906 +0000 UTC m=+0.119451056 container start c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.131984346 +0000 UTC m=+0.122307526 container attach c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:09:55 np0005605476 heuristic_neumann[285795]: 167 167
Feb  2 13:09:55 np0005605476 systemd[1]: libpod-c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21.scope: Deactivated successfully.
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.136571626 +0000 UTC m=+0.126894776 container died c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 13:09:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-69262ff0bb5ff7758d984d9f2c2ba4ab2fab13b8a1e44cbc08b15dca9db84bf6-merged.mount: Deactivated successfully.
Feb  2 13:09:55 np0005605476 podman[285778]: 2026-02-02 18:09:55.172514698 +0000 UTC m=+0.162837858 container remove c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 13:09:55 np0005605476 systemd[1]: libpod-conmon-c55d99184f8187550d00b15dc027792c864291579d6335f755e79c54a0120b21.scope: Deactivated successfully.
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.301974715 +0000 UTC m=+0.038442624 container create 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:09:55 np0005605476 systemd[1]: Started libpod-conmon-928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d.scope.
Feb  2 13:09:55 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:55 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.285523222 +0000 UTC m=+0.021991161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.385851548 +0000 UTC m=+0.122319477 container init 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.396243561 +0000 UTC m=+0.132711460 container start 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.399791131 +0000 UTC m=+0.136259040 container attach 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:09:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:09:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:55 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:09:55 np0005605476 nova_compute[239846]: 2026-02-02 18:09:55.681 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 13:09:55 np0005605476 pedantic_matsumoto[285835]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:09:55 np0005605476 pedantic_matsumoto[285835]: --> All data devices are unavailable
Feb  2 13:09:55 np0005605476 systemd[1]: libpod-928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d.scope: Deactivated successfully.
Feb  2 13:09:55 np0005605476 conmon[285835]: conmon 928f1c862467a95e2740 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d.scope/container/memory.events
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.791326091 +0000 UTC m=+0.527794020 container died 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 13:09:55 np0005605476 systemd[1]: var-lib-containers-storage-overlay-65143d78833149b1a2725876c3cf6d1508a6fbad3b19eb4f308a66fd94af9e47-merged.mount: Deactivated successfully.
Feb  2 13:09:55 np0005605476 podman[285818]: 2026-02-02 18:09:55.829603839 +0000 UTC m=+0.566071748 container remove 928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 13:09:55 np0005605476 systemd[1]: libpod-conmon-928f1c862467a95e27404288905c603ef7519d7fac70f53d65362bdf8821b71d.scope: Deactivated successfully.
Feb  2 13:09:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:09:55 np0005605476 nova_compute[239846]: 2026-02-02 18:09:55.921 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 13:09:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.241932845 +0000 UTC m=+0.039114343 container create 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:09:56 np0005605476 nova_compute[239846]: 2026-02-02 18:09:56.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 13:09:56 np0005605476 nova_compute[239846]: 2026-02-02 18:09:56.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 13:09:56 np0005605476 nova_compute[239846]: 2026-02-02 18:09:56.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 13:09:56 np0005605476 nova_compute[239846]: 2026-02-02 18:09:56.258 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 13:09:56 np0005605476 systemd[1]: Started libpod-conmon-7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42.scope.
Feb  2 13:09:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.223832975 +0000 UTC m=+0.021014513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.321743713 +0000 UTC m=+0.118925251 container init 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.328304958 +0000 UTC m=+0.125486446 container start 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.331715674 +0000 UTC m=+0.128897192 container attach 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:09:56 np0005605476 beautiful_euclid[285945]: 167 167
Feb  2 13:09:56 np0005605476 systemd[1]: libpod-7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42.scope: Deactivated successfully.
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.335565233 +0000 UTC m=+0.132746731 container died 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:09:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c2c3564e6a743e2236119afe0902d040434457247a569b4f0e67875135b36da5-merged.mount: Deactivated successfully.
Feb  2 13:09:56 np0005605476 podman[285929]: 2026-02-02 18:09:56.371111444 +0000 UTC m=+0.168292942 container remove 7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 13:09:56 np0005605476 systemd[1]: libpod-conmon-7b05e1e05718698425ac4375ae6f4b2ceb0805fa6907a36bde36f560c8d71b42.scope: Deactivated successfully.
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.497008371 +0000 UTC m=+0.040099671 container create d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:09:56 np0005605476 systemd[1]: Started libpod-conmon-d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce.scope.
Feb  2 13:09:56 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282738a9656ff99522c0b78d72e32c1684206302f4e8ed86effac02ce4d98edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282738a9656ff99522c0b78d72e32c1684206302f4e8ed86effac02ce4d98edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282738a9656ff99522c0b78d72e32c1684206302f4e8ed86effac02ce4d98edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:56 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282738a9656ff99522c0b78d72e32c1684206302f4e8ed86effac02ce4d98edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.481522774 +0000 UTC m=+0.024614094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.593438577 +0000 UTC m=+0.136529957 container init d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.599138528 +0000 UTC m=+0.142229828 container start d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.603589333 +0000 UTC m=+0.146680653 container attach d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:09:56 np0005605476 boring_lewin[285986]: {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    "0": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "devices": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "/dev/loop3"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            ],
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_name": "ceph_lv0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_size": "21470642176",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "name": "ceph_lv0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "tags": {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_name": "ceph",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.crush_device_class": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.encrypted": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.objectstore": "bluestore",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_id": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.vdo": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.with_tpm": "0"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            },
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "vg_name": "ceph_vg0"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        }
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    ],
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    "1": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "devices": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "/dev/loop4"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            ],
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_name": "ceph_lv1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_size": "21470642176",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "name": "ceph_lv1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "tags": {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_name": "ceph",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.crush_device_class": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.encrypted": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.objectstore": "bluestore",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_id": "1",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.vdo": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.with_tpm": "0"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            },
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "vg_name": "ceph_vg1"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        }
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    ],
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    "2": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "devices": [
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "/dev/loop5"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            ],
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_name": "ceph_lv2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_size": "21470642176",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "name": "ceph_lv2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "tags": {
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.cluster_name": "ceph",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.crush_device_class": "",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.encrypted": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.objectstore": "bluestore",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osd_id": "2",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.vdo": "0",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:                "ceph.with_tpm": "0"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            },
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "type": "block",
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:            "vg_name": "ceph_vg2"
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:        }
Feb  2 13:09:56 np0005605476 boring_lewin[285986]:    ]
Feb  2 13:09:56 np0005605476 boring_lewin[285986]: }
Feb  2 13:09:56 np0005605476 systemd[1]: libpod-d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce.scope: Deactivated successfully.
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.872635542 +0000 UTC m=+0.415726872 container died d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:09:56 np0005605476 systemd[1]: var-lib-containers-storage-overlay-282738a9656ff99522c0b78d72e32c1684206302f4e8ed86effac02ce4d98edb-merged.mount: Deactivated successfully.
Feb  2 13:09:56 np0005605476 podman[285970]: 2026-02-02 18:09:56.914682317 +0000 UTC m=+0.457773617 container remove d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lewin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:09:56 np0005605476 systemd[1]: libpod-conmon-d2d895901e57756506cd01e60506dd8cba0ddd9c21bce78c80199afbef6b53ce.scope: Deactivated successfully.
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.359040405 +0000 UTC m=+0.045793201 container create 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb  2 13:09:57 np0005605476 systemd[1]: Started libpod-conmon-5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721.scope.
Feb  2 13:09:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.340389309 +0000 UTC m=+0.027142165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.436506007 +0000 UTC m=+0.123258833 container init 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.442556158 +0000 UTC m=+0.129308964 container start 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.44581989 +0000 UTC m=+0.132572716 container attach 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 13:09:57 np0005605476 eager_galileo[286086]: 167 167
Feb  2 13:09:57 np0005605476 systemd[1]: libpod-5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721.scope: Deactivated successfully.
Feb  2 13:09:57 np0005605476 conmon[286086]: conmon 5da3c9a2dde6a682bff2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721.scope/container/memory.events
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.449525314 +0000 UTC m=+0.136278140 container died 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:09:57 np0005605476 systemd[1]: var-lib-containers-storage-overlay-f0106021b81023da2f0cfc688aa547ddad17ceed9a8308e901f3e6159004b916-merged.mount: Deactivated successfully.
Feb  2 13:09:57 np0005605476 podman[286070]: 2026-02-02 18:09:57.483184022 +0000 UTC m=+0.169936828 container remove 5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galileo, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:09:57 np0005605476 systemd[1]: libpod-conmon-5da3c9a2dde6a682bff2ad835cd4aa1758731d4291e6449449108a2c46a0a721.scope: Deactivated successfully.
Feb  2 13:09:57 np0005605476 podman[286111]: 2026-02-02 18:09:57.613118913 +0000 UTC m=+0.045271877 container create 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:09:57 np0005605476 systemd[1]: Started libpod-conmon-26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965.scope.
Feb  2 13:09:57 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:09:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e649125ed79dd3ad692e407c29a9792d79ecf94c7230b7cbd4aa89b0ccf35c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e649125ed79dd3ad692e407c29a9792d79ecf94c7230b7cbd4aa89b0ccf35c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e649125ed79dd3ad692e407c29a9792d79ecf94c7230b7cbd4aa89b0ccf35c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:57 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e649125ed79dd3ad692e407c29a9792d79ecf94c7230b7cbd4aa89b0ccf35c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:09:57 np0005605476 podman[286111]: 2026-02-02 18:09:57.591611807 +0000 UTC m=+0.023764861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:09:57 np0005605476 podman[286111]: 2026-02-02 18:09:57.691734797 +0000 UTC m=+0.123887791 container init 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 13:09:57 np0005605476 podman[286111]: 2026-02-02 18:09:57.69750071 +0000 UTC m=+0.129653664 container start 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:09:57 np0005605476 podman[286111]: 2026-02-02 18:09:57.700951647 +0000 UTC m=+0.133104651 container attach 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:09:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:09:58 np0005605476 nova_compute[239846]: 2026-02-02 18:09:58.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:58 np0005605476 lvm[286207]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:09:58 np0005605476 lvm[286206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:09:58 np0005605476 lvm[286207]: VG ceph_vg1 finished
Feb  2 13:09:58 np0005605476 lvm[286206]: VG ceph_vg0 finished
Feb  2 13:09:58 np0005605476 lvm[286209]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:09:58 np0005605476 lvm[286209]: VG ceph_vg2 finished
Feb  2 13:09:58 np0005605476 lvm[286211]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:09:58 np0005605476 lvm[286211]: VG ceph_vg2 finished
Feb  2 13:09:58 np0005605476 eager_cerf[286128]: {}
Feb  2 13:09:58 np0005605476 systemd[1]: libpod-26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965.scope: Deactivated successfully.
Feb  2 13:09:58 np0005605476 systemd[1]: libpod-26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965.scope: Consumed 1.177s CPU time.
Feb  2 13:09:58 np0005605476 podman[286111]: 2026-02-02 18:09:58.52286313 +0000 UTC m=+0.955016084 container died 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:09:58 np0005605476 systemd[1]: var-lib-containers-storage-overlay-13e649125ed79dd3ad692e407c29a9792d79ecf94c7230b7cbd4aa89b0ccf35c-merged.mount: Deactivated successfully.
Feb  2 13:09:58 np0005605476 podman[286111]: 2026-02-02 18:09:58.564689158 +0000 UTC m=+0.996842112 container remove 26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 13:09:58 np0005605476 systemd[1]: libpod-conmon-26449fcf1f489c8da22a0dd1f011d36f6afe0c4ba30e29597b8c09ee15335965.scope: Deactivated successfully.
Feb  2 13:09:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:09:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:09:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:59 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:09:59 np0005605476 nova_compute[239846]: 2026-02-02 18:09:59.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:09:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:00 np0005605476 nova_compute[239846]: 2026-02-02 18:10:00.238 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:00 np0005605476 nova_compute[239846]: 2026-02-02 18:10:00.922 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:01 np0005605476 nova_compute[239846]: 2026-02-02 18:10:01.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:04 np0005605476 nova_compute[239846]: 2026-02-02 18:10:04.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:04 np0005605476 nova_compute[239846]: 2026-02-02 18:10:04.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:10:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:10:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3365854692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:10:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:10:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3365854692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:10:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:05 np0005605476 nova_compute[239846]: 2026-02-02 18:10:05.924 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:07 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:09 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:10 np0005605476 nova_compute[239846]: 2026-02-02 18:10:10.925 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:11 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:13 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:14 np0005605476 podman[286250]: 2026-02-02 18:10:14.637421602 +0000 UTC m=+0.080560801 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 13:10:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:15 np0005605476 nova_compute[239846]: 2026-02-02 18:10:15.926 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:15 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:17 np0005605476 podman[286270]: 2026-02-02 18:10:17.625473298 +0000 UTC m=+0.072832753 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 13:10:17 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:10:18 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8730 writes, 39K keys, 8730 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8730 writes, 8730 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1389 writes, 6344 keys, 1389 commit groups, 1.0 writes per commit group, ingest: 8.98 MB, 0.01 MB/s#012Interval WAL: 1389 writes, 1389 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     87.3      0.53              0.11        22    0.024       0      0       0.0       0.0#012  L6      1/0   11.32 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0    200.3    168.3      1.09              0.42        21    0.052    118K    12K       0.0       0.0#012 Sum      1/0   11.32 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0    134.9    141.8      1.62              0.53        43    0.038    118K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    199.8    204.1      0.30              0.14        10    0.030     36K   2560       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    200.3    168.3      1.09              0.42        21    0.052    118K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.7      0.53              0.11        21    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.9      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.08 MB/s write, 0.21 GB read, 0.07 MB/s read, 1.6 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9805658d0#2 capacity: 304.00 MB usage: 26.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000335 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1929,25.34 MB,8.33499%) FilterBlock(44,345.05 KB,0.110842%) IndexBlock(44,658.67 KB,0.21159%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 13:10:19 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:20 np0005605476 nova_compute[239846]: 2026-02-02 18:10:20.928 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:21 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:23 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:25 np0005605476 nova_compute[239846]: 2026-02-02 18:10:25.930 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:25 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:27 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:29 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.932 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.933 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.933 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.934 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.934 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:10:30 np0005605476 nova_compute[239846]: 2026-02-02 18:10:30.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:31 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:33 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:35 np0005605476 nova_compute[239846]: 2026-02-02 18:10:35.935 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:35 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:10:36
Feb  2 13:10:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:10:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:10:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta']
Feb  2 13:10:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:10:37 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:39 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:40 np0005605476 nova_compute[239846]: 2026-02-02 18:10:40.936 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:41 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:43 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:44 np0005605476 nova_compute[239846]: 2026-02-02 18:10:44.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:45 np0005605476 podman[286297]: 2026-02-02 18:10:45.620172801 +0000 UTC m=+0.074305964 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb  2 13:10:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:45 np0005605476 nova_compute[239846]: 2026-02-02 18:10:45.937 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:45 np0005605476 nova_compute[239846]: 2026-02-02 18:10:45.938 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:45 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:10:46.662 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:10:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:10:46.663 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:10:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:10:46.663 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:10:47 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:48 np0005605476 podman[286316]: 2026-02-02 18:10:48.635888127 +0000 UTC m=+0.089330228 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 13:10:49 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:50 np0005605476 nova_compute[239846]: 2026-02-02 18:10:50.939 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:50 np0005605476 nova_compute[239846]: 2026-02-02 18:10:50.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:10:50 np0005605476 nova_compute[239846]: 2026-02-02 18:10:50.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:10:50 np0005605476 nova_compute[239846]: 2026-02-02 18:10:50.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:10:50 np0005605476 nova_compute[239846]: 2026-02-02 18:10:50.940 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:10:51 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.271 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.271 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.271 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.271 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.272 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:10:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:10:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905785447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.822 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.936 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.937 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4237MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.938 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:10:53 np0005605476 nova_compute[239846]: 2026-02-02 18:10:53.938 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:10:53 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.132 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.133 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.195 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing inventories for resource provider a0b0d175-0948-46db-92ba-608ef43a689f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.399 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating ProviderTree inventory for provider a0b0d175-0948-46db-92ba-608ef43a689f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.400 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Updating inventory in ProviderTree for provider a0b0d175-0948-46db-92ba-608ef43a689f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.413 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing aggregate associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.431 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Refreshing trait associations for resource provider a0b0d175-0948-46db-92ba-608ef43a689f, traits: COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 13:10:54 np0005605476 nova_compute[239846]: 2026-02-02 18:10:54.459 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:10:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:10:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387557905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.009 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.014 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.037 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.039 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.039 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:10:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:10:55 np0005605476 nova_compute[239846]: 2026-02-02 18:10:55.941 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:10:55 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:56 np0005605476 nova_compute[239846]: 2026-02-02 18:10:56.035 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:56 np0005605476 nova_compute[239846]: 2026-02-02 18:10:56.035 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:57 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:10:58 np0005605476 nova_compute[239846]: 2026-02-02 18:10:58.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:58 np0005605476 nova_compute[239846]: 2026-02-02 18:10:58.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:10:58 np0005605476 nova_compute[239846]: 2026-02-02 18:10:58.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:10:58 np0005605476 nova_compute[239846]: 2026-02-02 18:10:58.264 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:10:59 np0005605476 nova_compute[239846]: 2026-02-02 18:10:59.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:10:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.639602011 +0000 UTC m=+0.032990070 container create 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:10:59 np0005605476 systemd[1]: Started libpod-conmon-93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a.scope.
Feb  2 13:10:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.713129872 +0000 UTC m=+0.106517961 container init 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.719157262 +0000 UTC m=+0.112545321 container start 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.624922667 +0000 UTC m=+0.018310746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.722373853 +0000 UTC m=+0.115761932 container attach 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb  2 13:10:59 np0005605476 tender_kilby[286546]: 167 167
Feb  2 13:10:59 np0005605476 systemd[1]: libpod-93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a.scope: Deactivated successfully.
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.725772488 +0000 UTC m=+0.119160557 container died 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:10:59 np0005605476 systemd[1]: var-lib-containers-storage-overlay-c1887c8967a8182750c2fb2209c6d2559a8b0adc1caef7ab1f8b5b24768be658-merged.mount: Deactivated successfully.
Feb  2 13:10:59 np0005605476 podman[286529]: 2026-02-02 18:10:59.759853839 +0000 UTC m=+0.153241898 container remove 93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 13:10:59 np0005605476 systemd[1]: libpod-conmon-93a4f04aece553ec4c3c6a1e99aa6471d100711c8b2455411e9e258b985a714a.scope: Deactivated successfully.
Feb  2 13:10:59 np0005605476 podman[286571]: 2026-02-02 18:10:59.884013986 +0000 UTC m=+0.034842332 container create 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:10:59 np0005605476 systemd[1]: Started libpod-conmon-9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f.scope.
Feb  2 13:10:59 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:10:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:10:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:10:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:10:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:10:59 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:10:59 np0005605476 podman[286571]: 2026-02-02 18:10:59.950352145 +0000 UTC m=+0.101180511 container init 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:10:59 np0005605476 podman[286571]: 2026-02-02 18:10:59.870001341 +0000 UTC m=+0.020829707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:10:59 np0005605476 podman[286571]: 2026-02-02 18:10:59.970982646 +0000 UTC m=+0.121810992 container start 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:10:59 np0005605476 podman[286571]: 2026-02-02 18:10:59.974127815 +0000 UTC m=+0.124956191 container attach 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:10:59 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:11:00 np0005605476 nova_compute[239846]: 2026-02-02 18:11:00.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:00 np0005605476 exciting_gates[286588]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:11:00 np0005605476 exciting_gates[286588]: --> All data devices are unavailable
Feb  2 13:11:00 np0005605476 systemd[1]: libpod-9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f.scope: Deactivated successfully.
Feb  2 13:11:00 np0005605476 conmon[286588]: conmon 9d69e857a4b932cd5973 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f.scope/container/memory.events
Feb  2 13:11:00 np0005605476 podman[286571]: 2026-02-02 18:11:00.413839132 +0000 UTC m=+0.564667478 container died 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 13:11:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-12866b7dabf0d3ea648a29f494810d9818547ccc9857137312d0a2e2af256a1a-merged.mount: Deactivated successfully.
Feb  2 13:11:00 np0005605476 podman[286571]: 2026-02-02 18:11:00.453895 +0000 UTC m=+0.604723346 container remove 9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 13:11:00 np0005605476 systemd[1]: libpod-conmon-9d69e857a4b932cd5973d3430b0f569d300cf906edbae9abb5c7b484f393348f.scope: Deactivated successfully.
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.856382519 +0000 UTC m=+0.039227426 container create d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:00 np0005605476 systemd[1]: Started libpod-conmon-d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe.scope.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.891080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860891141, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1438, "num_deletes": 255, "total_data_size": 2279398, "memory_usage": 2323920, "flush_reason": "Manual Compaction"}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860901650, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2246600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38896, "largest_seqno": 40333, "table_properties": {"data_size": 2239792, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13798, "raw_average_key_size": 19, "raw_value_size": 2226211, "raw_average_value_size": 3157, "num_data_blocks": 177, "num_entries": 705, "num_filter_entries": 705, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055710, "oldest_key_time": 1770055710, "file_creation_time": 1770055860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 10656 microseconds, and 4569 cpu microseconds.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.901733) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2246600 bytes OK
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.901775) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.903884) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.903933) EVENT_LOG_v1 {"time_micros": 1770055860903923, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.903962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2273037, prev total WAL file size 2273037, number of live WAL files 2.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.904725) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2193KB)], [80(11MB)]
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860904791, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14115703, "oldest_snapshot_seqno": -1}
Feb  2 13:11:00 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.92778622 +0000 UTC m=+0.110631157 container init d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.932813942 +0000 UTC m=+0.115658859 container start d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.840250154 +0000 UTC m=+0.023095091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:11:00 np0005605476 reverent_haslett[286699]: 167 167
Feb  2 13:11:00 np0005605476 systemd[1]: libpod-d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe.scope: Deactivated successfully.
Feb  2 13:11:00 np0005605476 conmon[286699]: conmon d4c85bf05f119fa0aa18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe.scope/container/memory.events
Feb  2 13:11:00 np0005605476 nova_compute[239846]: 2026-02-02 18:11:00.942 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7393 keys, 13954700 bytes, temperature: kUnknown
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860955239, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 13954700, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13896939, "index_size": 38193, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 186429, "raw_average_key_size": 25, "raw_value_size": 13756075, "raw_average_value_size": 1860, "num_data_blocks": 1529, "num_entries": 7393, "num_filter_entries": 7393, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.955626) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 13954700 bytes
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.956691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 279.1 rd, 276.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.3 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(12.5) write-amplify(6.2) OK, records in: 7915, records dropped: 522 output_compression: NoCompression
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.956714) EVENT_LOG_v1 {"time_micros": 1770055860956703, "job": 46, "event": "compaction_finished", "compaction_time_micros": 50568, "compaction_time_cpu_micros": 23922, "output_level": 6, "num_output_files": 1, "total_output_size": 13954700, "num_input_records": 7915, "num_output_records": 7393, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860957218, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055860959134, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.904631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.959267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.959275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.959278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.959280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:00.959282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.95651737 +0000 UTC m=+0.139362327 container attach d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.959680819 +0000 UTC m=+0.142525736 container died d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 13:11:00 np0005605476 systemd[1]: var-lib-containers-storage-overlay-3a18192bbdf213c449f8ed29e8729bfa369ce2896a6201fd9df5d87ca8e2d76b-merged.mount: Deactivated successfully.
Feb  2 13:11:00 np0005605476 podman[286683]: 2026-02-02 18:11:00.998007069 +0000 UTC m=+0.180852006 container remove d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 13:11:01 np0005605476 systemd[1]: libpod-conmon-d4c85bf05f119fa0aa1803be79c45392c6a0079901becf94347e013e7b1231fe.scope: Deactivated successfully.
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.13224655 +0000 UTC m=+0.041714736 container create f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 13:11:01 np0005605476 systemd[1]: Started libpod-conmon-f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5.scope.
Feb  2 13:11:01 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:11:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ebad82bcbaad7cb3b7d331f23167a3335fd3a318135d978fb2173bd682dfb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ebad82bcbaad7cb3b7d331f23167a3335fd3a318135d978fb2173bd682dfb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ebad82bcbaad7cb3b7d331f23167a3335fd3a318135d978fb2173bd682dfb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:01 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ebad82bcbaad7cb3b7d331f23167a3335fd3a318135d978fb2173bd682dfb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.113040649 +0000 UTC m=+0.022508855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.220183688 +0000 UTC m=+0.129651894 container init f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.226604448 +0000 UTC m=+0.136072634 container start f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.230251491 +0000 UTC m=+0.139719697 container attach f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:11:01 np0005605476 nova_compute[239846]: 2026-02-02 18:11:01.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]: {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    "0": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "devices": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "/dev/loop3"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            ],
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_name": "ceph_lv0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_size": "21470642176",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "name": "ceph_lv0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "tags": {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_name": "ceph",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.crush_device_class": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.encrypted": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.objectstore": "bluestore",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_id": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.vdo": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.with_tpm": "0"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            },
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "vg_name": "ceph_vg0"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        }
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    ],
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    "1": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "devices": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "/dev/loop4"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            ],
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_name": "ceph_lv1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_size": "21470642176",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "name": "ceph_lv1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "tags": {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_name": "ceph",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.crush_device_class": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.encrypted": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.objectstore": "bluestore",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_id": "1",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.vdo": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.with_tpm": "0"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            },
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "vg_name": "ceph_vg1"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        }
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    ],
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    "2": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "devices": [
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "/dev/loop5"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            ],
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_name": "ceph_lv2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_size": "21470642176",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "name": "ceph_lv2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "tags": {
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.cluster_name": "ceph",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.crush_device_class": "",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.encrypted": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.objectstore": "bluestore",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osd_id": "2",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.vdo": "0",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:                "ceph.with_tpm": "0"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            },
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "type": "block",
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:            "vg_name": "ceph_vg2"
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:        }
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]:    ]
Feb  2 13:11:01 np0005605476 elastic_rosalind[286739]: }
Feb  2 13:11:01 np0005605476 systemd[1]: libpod-f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5.scope: Deactivated successfully.
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.504379674 +0000 UTC m=+0.413847860 container died f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 13:11:01 np0005605476 systemd[1]: var-lib-containers-storage-overlay-09ebad82bcbaad7cb3b7d331f23167a3335fd3a318135d978fb2173bd682dfb6-merged.mount: Deactivated successfully.
Feb  2 13:11:01 np0005605476 podman[286723]: 2026-02-02 18:11:01.540735768 +0000 UTC m=+0.450203954 container remove f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:11:01 np0005605476 systemd[1]: libpod-conmon-f39d3307fee5aae31bec348e3fcfd023a46e8a6eff4d7d9b16e57ca914a8c6a5.scope: Deactivated successfully.
Feb  2 13:11:01 np0005605476 podman[286822]: 2026-02-02 18:11:01.982269356 +0000 UTC m=+0.039159984 container create e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:11:01 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:02 np0005605476 systemd[1]: Started libpod-conmon-e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a.scope.
Feb  2 13:11:02 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:02.052294939 +0000 UTC m=+0.109185627 container init e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:02.057134515 +0000 UTC m=+0.114025143 container start e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:02.060598283 +0000 UTC m=+0.117488921 container attach e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:01.965911265 +0000 UTC m=+0.022801913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:11:02 np0005605476 flamboyant_black[286839]: 167 167
Feb  2 13:11:02 np0005605476 systemd[1]: libpod-e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a.scope: Deactivated successfully.
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:02.062645741 +0000 UTC m=+0.119536379 container died e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 13:11:02 np0005605476 systemd[1]: var-lib-containers-storage-overlay-eff3cf14213338636363afada4c0201e620cca59f6c85b83d61a0e867521dcf0-merged.mount: Deactivated successfully.
Feb  2 13:11:02 np0005605476 podman[286822]: 2026-02-02 18:11:02.098357407 +0000 UTC m=+0.155248035 container remove e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_black, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 13:11:02 np0005605476 systemd[1]: libpod-conmon-e25f41bc3b5119da1a762a86e44757c47ac6b9814fdb82710f1bd81bfb14f62a.scope: Deactivated successfully.
Feb  2 13:11:02 np0005605476 podman[286863]: 2026-02-02 18:11:02.21881968 +0000 UTC m=+0.038860406 container create 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:11:02 np0005605476 systemd[1]: Started libpod-conmon-000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d.scope.
Feb  2 13:11:02 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:11:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b628c6fa94b1c745cf8e8b980c4190c05fab5e35d33e1a189c571d998b916525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b628c6fa94b1c745cf8e8b980c4190c05fab5e35d33e1a189c571d998b916525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b628c6fa94b1c745cf8e8b980c4190c05fab5e35d33e1a189c571d998b916525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:02 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b628c6fa94b1c745cf8e8b980c4190c05fab5e35d33e1a189c571d998b916525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:11:02 np0005605476 podman[286863]: 2026-02-02 18:11:02.295926012 +0000 UTC m=+0.115966768 container init 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:11:02 np0005605476 podman[286863]: 2026-02-02 18:11:02.199974469 +0000 UTC m=+0.020015215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:11:02 np0005605476 podman[286863]: 2026-02-02 18:11:02.301266403 +0000 UTC m=+0.121307129 container start 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:11:02 np0005605476 podman[286863]: 2026-02-02 18:11:02.304333599 +0000 UTC m=+0.124374325 container attach 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:11:02 np0005605476 lvm[286958]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:11:02 np0005605476 lvm[286958]: VG ceph_vg0 finished
Feb  2 13:11:02 np0005605476 lvm[286961]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:11:02 np0005605476 lvm[286961]: VG ceph_vg2 finished
Feb  2 13:11:02 np0005605476 lvm[286959]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:11:02 np0005605476 lvm[286959]: VG ceph_vg1 finished
Feb  2 13:11:03 np0005605476 amazing_albattani[286880]: {}
Feb  2 13:11:03 np0005605476 systemd[1]: libpod-000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d.scope: Deactivated successfully.
Feb  2 13:11:03 np0005605476 systemd[1]: libpod-000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d.scope: Consumed 1.050s CPU time.
Feb  2 13:11:03 np0005605476 podman[286863]: 2026-02-02 18:11:03.054627575 +0000 UTC m=+0.874668331 container died 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:11:03 np0005605476 systemd[1]: var-lib-containers-storage-overlay-b628c6fa94b1c745cf8e8b980c4190c05fab5e35d33e1a189c571d998b916525-merged.mount: Deactivated successfully.
Feb  2 13:11:03 np0005605476 podman[286863]: 2026-02-02 18:11:03.09812704 +0000 UTC m=+0.918167766 container remove 000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:11:03 np0005605476 systemd[1]: libpod-conmon-000077f27401d6bfc66116b5c866b00b28ff2867a4286cdc96430665ae36776d.scope: Deactivated successfully.
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:11:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:11:03 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:04 np0005605476 nova_compute[239846]: 2026-02-02 18:11:04.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:04 np0005605476 nova_compute[239846]: 2026-02-02 18:11:04.244 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:11:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:11:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1870912750' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:11:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:11:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1870912750' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:11:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:05 np0005605476 nova_compute[239846]: 2026-02-02 18:11:05.944 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:05 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:10 np0005605476 nova_compute[239846]: 2026-02-02 18:11:10.947 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:11:15 np0005605476 ceph-osd[85696]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 31K writes, 116K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 12K syncs, 2.62 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 835 writes, 2145 keys, 835 commit groups, 1.0 writes per commit group, ingest: 1.01 MB, 0.00 MB/s#012Interval WAL: 835 writes, 401 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.900197) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875900237, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 378, "num_deletes": 250, "total_data_size": 262242, "memory_usage": 268944, "flush_reason": "Manual Compaction"}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875903516, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 246493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40334, "largest_seqno": 40711, "table_properties": {"data_size": 244185, "index_size": 472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6266, "raw_average_key_size": 20, "raw_value_size": 239538, "raw_average_value_size": 782, "num_data_blocks": 20, "num_entries": 306, "num_filter_entries": 306, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055861, "oldest_key_time": 1770055861, "file_creation_time": 1770055875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3383 microseconds, and 1802 cpu microseconds.
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.903577) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 246493 bytes OK
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.903603) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.905516) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.905531) EVENT_LOG_v1 {"time_micros": 1770055875905526, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.905559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 259778, prev total WAL file size 259778, number of live WAL files 2.
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.905967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(240KB)], [83(13MB)]
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875906012, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 14201193, "oldest_snapshot_seqno": -1}
Feb  2 13:11:15 np0005605476 nova_compute[239846]: 2026-02-02 18:11:15.948 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7190 keys, 10864507 bytes, temperature: kUnknown
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875961953, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10864507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10812785, "index_size": 32666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 182397, "raw_average_key_size": 25, "raw_value_size": 10680110, "raw_average_value_size": 1485, "num_data_blocks": 1298, "num_entries": 7190, "num_filter_entries": 7190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.962259) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10864507 bytes
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.963399) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.3 rd, 193.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.3 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(101.7) write-amplify(44.1) OK, records in: 7699, records dropped: 509 output_compression: NoCompression
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.963421) EVENT_LOG_v1 {"time_micros": 1770055875963411, "job": 48, "event": "compaction_finished", "compaction_time_micros": 56068, "compaction_time_cpu_micros": 23024, "output_level": 6, "num_output_files": 1, "total_output_size": 10864507, "num_input_records": 7699, "num_output_records": 7190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875963611, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055875964731, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.905859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.964840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.964854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.964856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.964857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:15 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:11:15.964859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:11:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:16 np0005605476 podman[287001]: 2026-02-02 18:11:16.604977699 +0000 UTC m=+0.053190579 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 13:11:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:11:19 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 29K writes, 115K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 11K syncs, 2.68 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 619 writes, 1381 keys, 619 commit groups, 1.0 writes per commit group, ingest: 0.80 MB, 0.00 MB/s#012Interval WAL: 619 writes, 297 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:11:19 np0005605476 podman[287021]: 2026-02-02 18:11:19.616833776 +0000 UTC m=+0.063511330 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb  2 13:11:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:20 np0005605476 nova_compute[239846]: 2026-02-02 18:11:20.950 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:11:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:11:23 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 24K writes, 94K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 24K writes, 8994 syncs, 2.71 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 445 writes, 994 keys, 445 commit groups, 1.0 writes per commit group, ingest: 0.55 MB, 0.00 MB/s#012Interval WAL: 445 writes, 211 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:11:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:25 np0005605476 ceph-mgr[75493]: [devicehealth INFO root] Check health
Feb  2 13:11:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:25 np0005605476 nova_compute[239846]: 2026-02-02 18:11:25.953 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:30 np0005605476 nova_compute[239846]: 2026-02-02 18:11:30.954 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:35 np0005605476 nova_compute[239846]: 2026-02-02 18:11:35.955 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:35 np0005605476 nova_compute[239846]: 2026-02-02 18:11:35.957 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:11:36
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Feb  2 13:11:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:11:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:11:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:40 np0005605476 nova_compute[239846]: 2026-02-02 18:11:40.957 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:44 np0005605476 nova_compute[239846]: 2026-02-02 18:11:44.243 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:45 np0005605476 nova_compute[239846]: 2026-02-02 18:11:45.959 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:11:46.664 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:11:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:11:46.665 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:11:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:11:46.665 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:11:47 np0005605476 podman[287047]: 2026-02-02 18:11:47.60391783 +0000 UTC m=+0.049135676 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:11:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:11:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:50 np0005605476 podman[287064]: 2026-02-02 18:11:50.681033954 +0000 UTC m=+0.134302764 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb  2 13:11:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:50 np0005605476 nova_compute[239846]: 2026-02-02 18:11:50.960 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:11:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.280 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.280 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.280 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.281 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.281 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:11:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:11:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803808020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.803 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.954 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.956 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4220MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.956 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:11:53 np0005605476 nova_compute[239846]: 2026-02-02 18:11:53.956 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:11:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.091 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.091 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.108 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:11:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:11:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2977318921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.674 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.697 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.801 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.803 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:11:54 np0005605476 nova_compute[239846]: 2026-02-02 18:11:54.803 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:11:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:11:55 np0005605476 nova_compute[239846]: 2026-02-02 18:11:55.962 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:11:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:56 np0005605476 nova_compute[239846]: 2026-02-02 18:11:56.799 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:56 np0005605476 nova_compute[239846]: 2026-02-02 18:11:56.800 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:11:59 np0005605476 nova_compute[239846]: 2026-02-02 18:11:59.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:11:59 np0005605476 nova_compute[239846]: 2026-02-02 18:11:59.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 13:11:59 np0005605476 nova_compute[239846]: 2026-02-02 18:11:59.242 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 13:11:59 np0005605476 nova_compute[239846]: 2026-02-02 18:11:59.382 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 13:11:59 np0005605476 nova_compute[239846]: 2026-02-02 18:11:59.382 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:00 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:00 np0005605476 nova_compute[239846]: 2026-02-02 18:12:00.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:00 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:00 np0005605476 nova_compute[239846]: 2026-02-02 18:12:00.964 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:02 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:03 np0005605476 nova_compute[239846]: 2026-02-02 18:12:03.237 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:03 np0005605476 nova_compute[239846]: 2026-02-02 18:12:03.300 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:03 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 13:12:04 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.234254119 +0000 UTC m=+0.037945320 container create 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 13:12:04 np0005605476 nova_compute[239846]: 2026-02-02 18:12:04.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:04 np0005605476 nova_compute[239846]: 2026-02-02 18:12:04.241 239853 DEBUG nova.compute.manager [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 13:12:04 np0005605476 systemd[1]: Started libpod-conmon-4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756.scope.
Feb  2 13:12:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.216382865 +0000 UTC m=+0.020074076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.317434402 +0000 UTC m=+0.121125613 container init 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.322887886 +0000 UTC m=+0.126579077 container start 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.326449976 +0000 UTC m=+0.130141167 container attach 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:12:04 np0005605476 goofy_noyce[287292]: 167 167
Feb  2 13:12:04 np0005605476 systemd[1]: libpod-4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756.scope: Deactivated successfully.
Feb  2 13:12:04 np0005605476 conmon[287292]: conmon 4007f1e79eff58032277 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756.scope/container/memory.events
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.329881083 +0000 UTC m=+0.133572274 container died 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:12:04 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d0fa0b85b95a8821a5ab804a3a95447ea533d068f599199888db9acd0db67a50-merged.mount: Deactivated successfully.
Feb  2 13:12:04 np0005605476 podman[287276]: 2026-02-02 18:12:04.368393287 +0000 UTC m=+0.172084478 container remove 4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 13:12:04 np0005605476 systemd[1]: libpod-conmon-4007f1e79eff58032277b159eb45dff1944f355c89d68dc6af8fca8dfe502756.scope: Deactivated successfully.
Feb  2 13:12:04 np0005605476 podman[287315]: 2026-02-02 18:12:04.503122463 +0000 UTC m=+0.044439293 container create 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 13:12:04 np0005605476 systemd[1]: Started libpod-conmon-3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69.scope.
Feb  2 13:12:04 np0005605476 podman[287315]: 2026-02-02 18:12:04.481528545 +0000 UTC m=+0.022845415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:04 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:04 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:04 np0005605476 podman[287315]: 2026-02-02 18:12:04.627353543 +0000 UTC m=+0.168670453 container init 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 13:12:04 np0005605476 podman[287315]: 2026-02-02 18:12:04.634603587 +0000 UTC m=+0.175920427 container start 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 13:12:04 np0005605476 podman[287315]: 2026-02-02 18:12:04.638351582 +0000 UTC m=+0.179668402 container attach 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:12:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 13:12:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598497113' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 13:12:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 13:12:05 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598497113' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 13:12:05 np0005605476 kind_chatelet[287332]: --> passed data devices: 0 physical, 3 LVM
Feb  2 13:12:05 np0005605476 kind_chatelet[287332]: --> All data devices are unavailable
Feb  2 13:12:05 np0005605476 systemd[1]: libpod-3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69.scope: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287315]: 2026-02-02 18:12:05.104888675 +0000 UTC m=+0.646205555 container died 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 13:12:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-de82154fbb399323b0fb47465f6c21f160c799762bf10d638b2ad9db116c3f18-merged.mount: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287315]: 2026-02-02 18:12:05.150644444 +0000 UTC m=+0.691961304 container remove 3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_chatelet, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 13:12:05 np0005605476 systemd[1]: libpod-conmon-3b512eed7eba31cc6a2e9d8bc1199e581e3696b1c10ee179bde3225d5ba84e69.scope: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.587043118 +0000 UTC m=+0.040797450 container create 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 13:12:05 np0005605476 systemd[1]: Started libpod-conmon-757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28.scope.
Feb  2 13:12:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.638413275 +0000 UTC m=+0.092167617 container init 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.643602041 +0000 UTC m=+0.097356363 container start 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.646255026 +0000 UTC m=+0.100009348 container attach 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 13:12:05 np0005605476 hopeful_merkle[287443]: 167 167
Feb  2 13:12:05 np0005605476 systemd[1]: libpod-757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28.scope: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.647708977 +0000 UTC m=+0.101463299 container died 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 13:12:05 np0005605476 systemd[1]: var-lib-containers-storage-overlay-d3a3b1d411bf69d08b709ae0c13c078e142609bebc4bb7d70763ff8e2d35fcb2-merged.mount: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.569625997 +0000 UTC m=+0.023380339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:05 np0005605476 podman[287427]: 2026-02-02 18:12:05.678670659 +0000 UTC m=+0.132424991 container remove 757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 13:12:05 np0005605476 systemd[1]: libpod-conmon-757babd3d9da7c9381eae746330bf58495d2379dbde004ab1d0445ad13c98b28.scope: Deactivated successfully.
Feb  2 13:12:05 np0005605476 podman[287467]: 2026-02-02 18:12:05.821782331 +0000 UTC m=+0.045336338 container create ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:12:05 np0005605476 systemd[1]: Started libpod-conmon-ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105.scope.
Feb  2 13:12:05 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7daa687cdcc695dc9fb4d73903999442f2aba47254a03578cf2c65c055791ed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7daa687cdcc695dc9fb4d73903999442f2aba47254a03578cf2c65c055791ed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7daa687cdcc695dc9fb4d73903999442f2aba47254a03578cf2c65c055791ed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:05 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7daa687cdcc695dc9fb4d73903999442f2aba47254a03578cf2c65c055791ed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:05 np0005605476 podman[287467]: 2026-02-02 18:12:05.803228028 +0000 UTC m=+0.026782055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:05 np0005605476 podman[287467]: 2026-02-02 18:12:05.905962832 +0000 UTC m=+0.129516849 container init ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 13:12:05 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:05 np0005605476 podman[287467]: 2026-02-02 18:12:05.912428454 +0000 UTC m=+0.135982441 container start ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 13:12:05 np0005605476 podman[287467]: 2026-02-02 18:12:05.916446078 +0000 UTC m=+0.140000075 container attach ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 13:12:05 np0005605476 nova_compute[239846]: 2026-02-02 18:12:05.966 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:12:06 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]: {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    "0": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "devices": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "/dev/loop3"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            ],
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_name": "ceph_lv0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_size": "21470642176",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=eaf642f2-cfb0-43d5-aab5-31b940552369,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "name": "ceph_lv0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "tags": {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_uuid": "0AKnlF-kT48-ijts-LrH2-qOCQ-1SQV-tK47Q1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_name": "ceph",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.crush_device_class": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.encrypted": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.objectstore": "bluestore",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_fsid": "eaf642f2-cfb0-43d5-aab5-31b940552369",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_id": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.vdo": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.with_tpm": "0"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            },
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "vg_name": "ceph_vg0"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        }
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    ],
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    "1": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "devices": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "/dev/loop4"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            ],
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_name": "ceph_lv1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_size": "21470642176",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=694d1bf9-7846-44e5-9a03-71f88deec6dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "name": "ceph_lv1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "tags": {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_uuid": "CqaHsN-Lsyk-Vn0N-Zzrw-Wpbm-Gaao-HPVA2s",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_name": "ceph",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.crush_device_class": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.encrypted": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.objectstore": "bluestore",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_fsid": "694d1bf9-7846-44e5-9a03-71f88deec6dd",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_id": "1",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.vdo": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.with_tpm": "0"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            },
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "vg_name": "ceph_vg1"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        }
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    ],
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    "2": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "devices": [
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "/dev/loop5"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            ],
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_name": "ceph_lv2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_size": "21470642176",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb48d0ef-3496-563c-b73d-661fb962013e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "lv_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "name": "ceph_lv2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "tags": {
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.block_uuid": "5McfYl-zwU7-cLGO-Uofu-9c3S-lR35-beE1HP",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cephx_lockbox_secret": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_fsid": "eb48d0ef-3496-563c-b73d-661fb962013e",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.cluster_name": "ceph",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.crush_device_class": "",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.encrypted": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.objectstore": "bluestore",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_fsid": "ddcbb29d-f4c3-4477-a0bc-a3d522b41ce5",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osd_id": "2",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.vdo": "0",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:                "ceph.with_tpm": "0"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            },
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "type": "block",
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:            "vg_name": "ceph_vg2"
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:        }
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]:    ]
Feb  2 13:12:06 np0005605476 inspiring_meitner[287484]: }
Feb  2 13:12:06 np0005605476 systemd[1]: libpod-ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105.scope: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287467]: 2026-02-02 18:12:06.234475837 +0000 UTC m=+0.458029824 container died ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:12:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-7daa687cdcc695dc9fb4d73903999442f2aba47254a03578cf2c65c055791ed2-merged.mount: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287467]: 2026-02-02 18:12:06.276172251 +0000 UTC m=+0.499726238 container remove ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:12:06 np0005605476 systemd[1]: libpod-conmon-ceaa8d4454634720b8795d4a92968248970be4456a3eb491a189c25875f10105.scope: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.727674391 +0000 UTC m=+0.049265899 container create 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 13:12:06 np0005605476 systemd[1]: Started libpod-conmon-0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa.scope.
Feb  2 13:12:06 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.711935697 +0000 UTC m=+0.033527225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.809750753 +0000 UTC m=+0.131342291 container init 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.815588727 +0000 UTC m=+0.137180235 container start 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.820012992 +0000 UTC m=+0.141604610 container attach 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 13:12:06 np0005605476 silly_hugle[287582]: 167 167
Feb  2 13:12:06 np0005605476 systemd[1]: libpod-0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa.scope: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.822663287 +0000 UTC m=+0.144254805 container died 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 13:12:06 np0005605476 systemd[1]: var-lib-containers-storage-overlay-e1b4bf586c54b65e6d83028d86582a73e52813c972b31d75b4d28de875d21d91-merged.mount: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287565]: 2026-02-02 18:12:06.859658149 +0000 UTC m=+0.181249657 container remove 0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hugle, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:12:06 np0005605476 systemd[1]: libpod-conmon-0ce00a5b154e7ea5c24277138f04142e73092f803e851e0e75d4fc87ee828cfa.scope: Deactivated successfully.
Feb  2 13:12:06 np0005605476 podman[287606]: 2026-02-02 18:12:06.988275691 +0000 UTC m=+0.038707211 container create b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 13:12:07 np0005605476 systemd[1]: Started libpod-conmon-b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f.scope.
Feb  2 13:12:07 np0005605476 systemd[1]: Started libcrun container.
Feb  2 13:12:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1228a56af7dbfe70182d8ae4a13c174fec1b7969c0ec284f97b3c203512a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1228a56af7dbfe70182d8ae4a13c174fec1b7969c0ec284f97b3c203512a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1228a56af7dbfe70182d8ae4a13c174fec1b7969c0ec284f97b3c203512a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:07 np0005605476 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c1228a56af7dbfe70182d8ae4a13c174fec1b7969c0ec284f97b3c203512a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 13:12:07 np0005605476 podman[287606]: 2026-02-02 18:12:07.065492966 +0000 UTC m=+0.115924496 container init b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:12:07 np0005605476 podman[287606]: 2026-02-02 18:12:06.973499055 +0000 UTC m=+0.023930585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 13:12:07 np0005605476 podman[287606]: 2026-02-02 18:12:07.073258495 +0000 UTC m=+0.123690005 container start b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 13:12:07 np0005605476 podman[287606]: 2026-02-02 18:12:07.077025821 +0000 UTC m=+0.127457331 container attach b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:07 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:07 np0005605476 lvm[287700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:12:07 np0005605476 lvm[287703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:12:07 np0005605476 lvm[287700]: VG ceph_vg0 finished
Feb  2 13:12:07 np0005605476 lvm[287703]: VG ceph_vg1 finished
Feb  2 13:12:07 np0005605476 lvm[287705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:12:07 np0005605476 lvm[287705]: VG ceph_vg2 finished
Feb  2 13:12:07 np0005605476 keen_wilson[287622]: {}
Feb  2 13:12:07 np0005605476 systemd[1]: libpod-b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f.scope: Deactivated successfully.
Feb  2 13:12:07 np0005605476 systemd[1]: libpod-b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f.scope: Consumed 1.081s CPU time.
Feb  2 13:12:07 np0005605476 podman[287708]: 2026-02-02 18:12:07.856250963 +0000 UTC m=+0.024663216 container died b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 13:12:07 np0005605476 systemd[1]: var-lib-containers-storage-overlay-33c1228a56af7dbfe70182d8ae4a13c174fec1b7969c0ec284f97b3c203512a7-merged.mount: Deactivated successfully.
Feb  2 13:12:07 np0005605476 podman[287708]: 2026-02-02 18:12:07.955410676 +0000 UTC m=+0.123822909 container remove b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wilson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 13:12:07 np0005605476 systemd[1]: libpod-conmon-b290ca9ee2f3dd872801e4f1688ad89b33d467132065c3ebe90623bab49f868f.scope: Deactivated successfully.
Feb  2 13:12:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 13:12:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:08 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 13:12:08 np0005605476 ceph-mon[75197]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:08 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:09 np0005605476 ceph-mon[75197]: from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' 
Feb  2 13:12:10 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:10 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:10 np0005605476 nova_compute[239846]: 2026-02-02 18:12:10.970 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:12:12 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:14 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:15 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:15 np0005605476 nova_compute[239846]: 2026-02-02 18:12:15.973 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:15 np0005605476 nova_compute[239846]: 2026-02-02 18:12:15.975 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:16 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.121020) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936121108, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 736, "num_deletes": 251, "total_data_size": 944906, "memory_usage": 959688, "flush_reason": "Manual Compaction"}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936126961, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 936378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40712, "largest_seqno": 41447, "table_properties": {"data_size": 932487, "index_size": 1671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8572, "raw_average_key_size": 19, "raw_value_size": 924763, "raw_average_value_size": 2096, "num_data_blocks": 74, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770055876, "oldest_key_time": 1770055876, "file_creation_time": 1770055936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 6042 microseconds, and 2605 cpu microseconds.
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.127040) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 936378 bytes OK
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.127103) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.128345) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.128358) EVENT_LOG_v1 {"time_micros": 1770055936128354, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.128376) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 941135, prev total WAL file size 941135, number of live WAL files 2.
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.128924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(914KB)], [86(10MB)]
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936128995, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11800885, "oldest_snapshot_seqno": -1}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7117 keys, 10129730 bytes, temperature: kUnknown
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936168811, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10129730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10079200, "index_size": 31602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 181542, "raw_average_key_size": 25, "raw_value_size": 9948506, "raw_average_value_size": 1397, "num_data_blocks": 1246, "num_entries": 7117, "num_filter_entries": 7117, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770052816, "oldest_key_time": 0, "file_creation_time": 1770055936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25cd6f31-be6a-4568-affa-77d2d10d4958", "db_session_id": "YVWEYR8NAABFSRFBSKLQ", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.169048) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10129730 bytes
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.170158) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 295.9 rd, 254.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(23.4) write-amplify(10.8) OK, records in: 7631, records dropped: 514 output_compression: NoCompression
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.170175) EVENT_LOG_v1 {"time_micros": 1770055936170167, "job": 50, "event": "compaction_finished", "compaction_time_micros": 39888, "compaction_time_cpu_micros": 18519, "output_level": 6, "num_output_files": 1, "total_output_size": 10129730, "num_input_records": 7631, "num_output_records": 7117, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936170490, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770055936171419, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.128798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.171517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.171523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.171525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.171527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:16 np0005605476 ceph-mon[75197]: rocksdb: (Original Log Time 2026/02/02-18:12:16.171530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 13:12:18 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Feb  2 13:12:18 np0005605476 podman[287748]: 2026-02-02 18:12:18.624199294 +0000 UTC m=+0.062721307 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 13:12:20 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:12:20 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:20 np0005605476 nova_compute[239846]: 2026-02-02 18:12:20.976 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:21 np0005605476 podman[287768]: 2026-02-02 18:12:21.632811578 +0000 UTC m=+0.079991754 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 13:12:22 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:12:24 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:12:25 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:25 np0005605476 nova_compute[239846]: 2026-02-02 18:12:25.976 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:25 np0005605476 nova_compute[239846]: 2026-02-02 18:12:25.979 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:26 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 13:12:28 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Feb  2 13:12:30 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Feb  2 13:12:30 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:30 np0005605476 nova_compute[239846]: 2026-02-02 18:12:30.978 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:32 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:34 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:35 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:35 np0005605476 nova_compute[239846]: 2026-02-02 18:12:35.980 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Optimize plan auto_2026-02-02_18:12:36
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] do_upmap
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'volumes']
Feb  2 13:12:36 np0005605476 ceph-mgr[75493]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:12:37 np0005605476 ceph-mgr[75493]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 13:12:38 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:40 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:40 np0005605476 systemd-logind[799]: New session 54 of user zuul.
Feb  2 13:12:40 np0005605476 systemd[1]: Started Session 54 of User zuul.
Feb  2 13:12:40 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:40 np0005605476 nova_compute[239846]: 2026-02-02 18:12:40.982 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:12:40 np0005605476 nova_compute[239846]: 2026-02-02 18:12:40.984 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:12:40 np0005605476 nova_compute[239846]: 2026-02-02 18:12:40.984 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 13:12:40 np0005605476 nova_compute[239846]: 2026-02-02 18:12:40.984 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:12:41 np0005605476 nova_compute[239846]: 2026-02-02 18:12:41.036 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:41 np0005605476 nova_compute[239846]: 2026-02-02 18:12:41.037 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 13:12:42 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:43 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19398 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:43 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19400 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:44 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:44 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 13:12:44 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367437055' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  2 13:12:45 np0005605476 nova_compute[239846]: 2026-02-02 18:12:45.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:45 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:46 np0005605476 nova_compute[239846]: 2026-02-02 18:12:46.038 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 13:12:46 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:12:46.666 155391 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:12:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:12:46.666 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:12:46 np0005605476 ovn_metadata_agent[155386]: 2026-02-02 18:12:46.666 155391 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:12:46 np0005605476 ovs-vsctl[288079]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  2 13:12:47 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  2 13:12:47 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.560402237935062e-06 of space, bias 1.0, pg target 0.0025681206713805186 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029149192412119868 of space, bias 1.0, pg target 0.8744757723635961 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2578116792824274e-06 of space, bias 1.0, pg target 0.0006773435037847282 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664319220655496 of space, bias 1.0, pg target 0.19992957661966487 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.678021347941137e-07 of space, bias 4.0, pg target 0.0011613625617529365 quantized to 16 (current 16)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 13:12:47 np0005605476 ceph-mgr[75493]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 13:12:47 np0005605476 virtqemud[239321]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 13:12:48 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:48 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: cache status {prefix=cache status} (starting...)
Feb  2 13:12:48 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: client ls {prefix=client ls} (starting...)
Feb  2 13:12:48 np0005605476 lvm[288405]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 13:12:48 np0005605476 lvm[288405]: VG ceph_vg0 finished
Feb  2 13:12:48 np0005605476 lvm[288414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 13:12:48 np0005605476 lvm[288414]: VG ceph_vg2 finished
Feb  2 13:12:48 np0005605476 lvm[288442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 13:12:48 np0005605476 lvm[288442]: VG ceph_vg1 finished
Feb  2 13:12:48 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19404 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: damage ls {prefix=damage ls} (starting...)
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump loads {prefix=dump loads} (starting...)
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  2 13:12:49 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19406 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  2 13:12:49 np0005605476 podman[288582]: 2026-02-02 18:12:49.618823707 +0000 UTC m=+0.060032432 container health_status 983aad36fbefc6eb42f7b2455e6339d70a90e29e2ce721c2d9ecdd2cd91b9e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb  2 13:12:49 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 13:12:49 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3103131772' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  2 13:12:49 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19410 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:49 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  2 13:12:50 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833842856' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 13:12:50 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  2 13:12:50 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19414 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:50 np0005605476 ceph-mgr[75493]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:12:50 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: 2026-02-02T18:12:50.343+0000 7f7c633f1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 13:12:50 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: ops {prefix=ops} (starting...)
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595878966' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  2 13:12:50 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506518512' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb  2 13:12:50 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: session ls {prefix=session ls} (starting...)
Feb  2 13:12:51 np0005605476 nova_compute[239846]: 2026-02-02 18:12:51.040 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:51 np0005605476 ceph-mds[95614]: mds.cephfs.compute-0.vvdoei asok_command: status {prefix=status} (starting...)
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911195537' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941484139' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 13:12:51 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19424 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 13:12:51 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469188553' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 13:12:52 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19428 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:52 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 13:12:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693083493' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 13:12:52 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 13:12:52 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76396354' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb  2 13:12:52 np0005605476 podman[288914]: 2026-02-02 18:12:52.667265744 +0000 UTC m=+0.114901548 container health_status 70e0d83cb45fbe649a29519f5074ad11df900a5702b7e7e666708ce90ca8d783 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fb44d116753823076754339ecdff5d26c5c02250617a2157b9bf22160a92362b-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec-b679b18dd4e53db9e352e8eb6b265beb4b106035d3e3bfb3cb99fdf41954fcec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020040995' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866945028' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 13:12:53 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089660430' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  2 13:12:53 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19440 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:53 np0005605476 ceph-mgr[75493]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 13:12:53 np0005605476 ceph-eb48d0ef-3496-563c-b73d-661fb962013e-mgr-compute-0-hccdnu[75489]: 2026-02-02T18:12:53.763+0000 7f7c633f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 13:12:54 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.242 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.282 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.282 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.282 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.283 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.283 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067024798' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765902086' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb  2 13:12:54 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19448 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44800 session 0x561087b40a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108b9ac800 session 0x56108bcfd500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561088c91340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121667584 unmapped: 50479104 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896d4800 session 0x561087b41c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108b9acc00 session 0x561086ecee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44000 session 0x561086d55dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1736912 data_alloc: 234881024 data_used: 14718752
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 50675712 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 50675712 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561087b40fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44800 session 0x56108944ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab44c00 session 0x56108947e1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 50659328 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f640f000/0x0/0x4ffc00000, data 0x314f617/0x327a000, compress 0x0/0x0/0x0, omap 0x25cc9, meta 0x3d4a337), peers [0,1] op hist [0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab45000 session 0x561088c90e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f8c14000/0x0/0x4ffc00000, data 0x314f5a5/0x3278000, compress 0x0/0x0/0x0, omap 0x25d91, meta 0x3d4a26f), peers [0,1] op hist [0,0,0,0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 50634752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x56108ab45400 session 0x561088c91dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 ms_handle_reset con 0x5610896c7800 session 0x561086cde380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 50585600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f8572000/0x0/0x4ffc00000, data 0x37ef617/0x391a000, compress 0x0/0x0/0x0, omap 0x25d91, meta 0x3d4a26f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1774689 data_alloc: 234881024 data_used: 14722766
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 196 handle_osd_map epochs [196,197], i have 197, src has [1,197]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.673086166s of 10.296282768s, submitted: 203
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 50536448 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab44800 session 0x5610897ece00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 47751168 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab44c00 session 0x56108b73b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 197 ms_handle_reset con 0x56108ab45800 session 0x561087bacc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 50896896 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 198 ms_handle_reset con 0x56108ab45c00 session 0x561088f461c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 48922624 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 50372608 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1857643 data_alloc: 234881024 data_used: 14722766
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896d4400 session 0x5610897edc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121774080 unmapped: 50372608 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab45000 session 0x56108947f500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f79de000/0x0/0x4ffc00000, data 0x437b7ce/0x44aa000, compress 0x0/0x0/0x0, omap 0x26796, meta 0x3d4986a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121692160 unmapped: 50454528 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896c7800 session 0x561088da3180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 50446336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121700352 unmapped: 50446336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab45800 session 0x561087b15880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x56108ab44c00 session 0x561088dff180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 ms_handle_reset con 0x5610896c7800 session 0x561089492700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 50814976 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1875133 data_alloc: 234881024 data_used: 14723351
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x5610896d4400 session 0x561087bac8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45000 session 0x561086dfae00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.727210999s of 10.117328644s, submitted: 81
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45800 session 0x561088da3a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 50806784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44800 session 0x561086d55500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f77ce000/0x0/0x4ffc00000, data 0x458e7dd/0x46be000, compress 0x0/0x0/0x0, omap 0x26796, meta 0x3d4986a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45000 session 0x561086dfb180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab45800 session 0x561087b38e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x5610896d4800 session 0x561087b408c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44400 session 0x561088d4fdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44000 session 0x561088f47880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 50020352 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 ms_handle_reset con 0x56108ab44000 session 0x56108b8c6e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f6fd8000/0x0/0x4ffc00000, data 0x4d83379/0x4eb4000, compress 0x0/0x0/0x0, omap 0x26c41, meta 0x3d493bf), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 200 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 201 ms_handle_reset con 0x5610896d4800 session 0x561088dfe700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 46219264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125927424 unmapped: 46219264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 46186496 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 202 ms_handle_reset con 0x56108ab44400 session 0x561087b396c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914581 data_alloc: 234881024 data_used: 22064581
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 46178304 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 202 ms_handle_reset con 0x56108ab45000 session 0x56108947fdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 126328832 unmapped: 45817856 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x435eb66/0x4491000, compress 0x0/0x0/0x0, omap 0x272c6, meta 0x3d48d3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 203 ms_handle_reset con 0x56108ab45800 session 0x561088dfe000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 203 ms_handle_reset con 0x56108ab45800 session 0x56108944c000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 43417600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 203 handle_osd_map epochs [203,204], i have 204, src has [1,204]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1937046 data_alloc: 251658240 data_used: 27083091
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 38772736 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.480896950s of 10.668888092s, submitted: 93
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 204 ms_handle_reset con 0x5610896d4800 session 0x561087bac1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 204 heartbeat osd_stat(store_statfs(0x4f8095000/0x0/0x4ffc00000, data 0x3cc21b7/0x3df5000, compress 0x0/0x0/0x0, omap 0x278a2, meta 0x3d4875e), peers [0,1] op hist [0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab45000 session 0x561086e9a1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 38330368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab44400 session 0x561088f47500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 205 ms_handle_reset con 0x56108ab44000 session 0x561088f47dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 38068224 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 38068224 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 38035456 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x56108ab44400 session 0x561086cbb6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x5610896d4800 session 0x561087b9cfc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 206 ms_handle_reset con 0x56108ab45800 session 0x561088e20c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1980818 data_alloc: 251658240 data_used: 27115859
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 207 ms_handle_reset con 0x56108bd30400 session 0x561086e9a380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 207 ms_handle_reset con 0x56108ab45000 session 0x56108b8c6540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134307840 unmapped: 37838848 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x407b9d1/0x3eb3000, compress 0x0/0x0/0x0, omap 0x28121, meta 0x3d47edf), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134307840 unmapped: 37838848 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 208 ms_handle_reset con 0x5610896d4800 session 0x561088d4efc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f7c59000/0x0/0x4ffc00000, data 0x43f25dd/0x422b000, compress 0x0/0x0/0x0, omap 0x2846d, meta 0x3d47b93), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137584640 unmapped: 34562048 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 34430976 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f7c54000/0x0/0x4ffc00000, data 0x43f41cd/0x422e000, compress 0x0/0x0/0x0, omap 0x285ae, meta 0x3d47a52), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 35962880 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x5610896c7800 session 0x561088d01dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x5610896d4400 session 0x56108944c700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x56108ab44400 session 0x561088da3c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2021686 data_alloc: 251658240 data_used: 28264449
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 209 ms_handle_reset con 0x56108ab45800 session 0x56108944d500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 35921920 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.097351074s of 10.501673698s, submitted: 128
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136224768 unmapped: 35921920 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 210 ms_handle_reset con 0x56108ab45800 session 0x56108b8c7dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 35913728 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 35913728 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f7c4a000/0x0/0x4ffc00000, data 0x4406911/0x4240000, compress 0x0/0x0/0x0, omap 0x28ef1, meta 0x3d4710f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896d4800 session 0x561088d00a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896c7800 session 0x561088da2c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 35897344 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x5610896d4400 session 0x5610897eda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108bd31800 session 0x561088d001c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108ab44400 session 0x561088d4f500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 ms_handle_reset con 0x56108bd30400 session 0x561088d4f6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f8cb9000/0x0/0x4ffc00000, data 0x3096003/0x31cd000, compress 0x0/0x0/0x0, omap 0x29593, meta 0x3d46a6d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1864928 data_alloc: 234881024 data_used: 20889914
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 213 ms_handle_reset con 0x5610896c7800 session 0x561086cbb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 213 handle_osd_map epochs [213,214], i have 214, src has [1,214]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1876702 data_alloc: 234881024 data_used: 20894894
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f8cb4000/0x0/0x4ffc00000, data 0x3099748/0x31d4000, compress 0x0/0x0/0x0, omap 0x29a64, meta 0x3d4659c), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 38674432 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4400 session 0x56108947e8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 38715392 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.027520180s of 10.344891548s, submitted: 135
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4800 session 0x5610897ec380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4800 session 0x561087bac000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896c7800 session 0x56108b73ac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x5610896d4400 session 0x56108b73afc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 ms_handle_reset con 0x56108ab44400 session 0x561086d49a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 37519360 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x56108bd30400 session 0x56108944dc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134635520 unmapped: 37511168 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x56108bd30400 session 0x56108bcfc000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 ms_handle_reset con 0x5610896c7800 session 0x561086cde8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f84d6000/0x0/0x4ffc00000, data 0x3871a6b/0x39b2000, compress 0x0/0x0/0x0, omap 0x2a4f0, meta 0x3d45b10), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 216 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 37494784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938593 data_alloc: 234881024 data_used: 20896336
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 218 ms_handle_reset con 0x5610896d4400 session 0x5610897ec000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 37453824 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 218 ms_handle_reset con 0x5610896d4800 session 0x561086d55a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 219 ms_handle_reset con 0x56108ab44400 session 0x561088dfec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 37453824 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 219 ms_handle_reset con 0x5610896c7800 session 0x56108b73a000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f84cd000/0x0/0x4ffc00000, data 0x3876c34/0x39b8000, compress 0x0/0x0/0x0, omap 0x2acf8, meta 0x3d45308), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 219 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x5610896d4400 session 0x5610897ed880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x3878824/0x39bb000, compress 0x0/0x0/0x0, omap 0x2afea, meta 0x3d45016), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 37675008 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9acc00 session 0x56108bcfd880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9ac800 session 0x56108944c1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108ab45800 session 0x56108944c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x3878824/0x39bb000, compress 0x0/0x0/0x0, omap 0x2afea, meta 0x3d45016), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1872001 data_alloc: 234881024 data_used: 19780688
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2cd7824/0x2e1a000, compress 0x0/0x0/0x0, omap 0x2b0d2, meta 0x3d44f2e), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x5610896c7800 session 0x561088d4f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.550335884s of 10.746232986s, submitted: 100
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 41410560 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 ms_handle_reset con 0x56108b9ac800 session 0x56108944c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9070000/0x0/0x4ffc00000, data 0x2cd7824/0x2e1a000, compress 0x0/0x0/0x0, omap 0x2b2e5, meta 0x3d44d1b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 221 ms_handle_reset con 0x56108b9acc00 session 0x561088e21340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 41402368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x561089a46400 session 0x561088c90e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x5610899ee400 session 0x561086e9ac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 222 ms_handle_reset con 0x5610896d4400 session 0x561087bace00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 41394176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1866350 data_alloc: 234881024 data_used: 19780688
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 41394176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x561089a46400 session 0x561088d4fa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f93aa000/0x0/0x4ffc00000, data 0x2994a13/0x2adb000, compress 0x0/0x0/0x0, omap 0x2bc0a, meta 0x3d443f6), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x5610896c7800 session 0x561087bac380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 40337408 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f93aa000/0x0/0x4ffc00000, data 0x2994a13/0x2adb000, compress 0x0/0x0/0x0, omap 0x2bc0a, meta 0x3d443f6), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 ms_handle_reset con 0x56108b9ac800 session 0x561086cbbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 225 ms_handle_reset con 0x56108b9acc00 session 0x561086dfa8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 40304640 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 225 ms_handle_reset con 0x5610896c7800 session 0x56108944da40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134258688 unmapped: 37888000 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 226 ms_handle_reset con 0x5610896d4400 session 0x5610897eca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 226 ms_handle_reset con 0x56108b9ac800 session 0x561089492a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925160 data_alloc: 234881024 data_used: 20200335
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 227 ms_handle_reset con 0x5610899eec00 session 0x561086e9bc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134840320 unmapped: 37306368 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x5610899ef000 session 0x561086cbba40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x5610896c7800 session 0x561086ecfc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 228 ms_handle_reset con 0x561089a46400 session 0x561088e216c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f8de9000/0x0/0x4ffc00000, data 0x2f38b2c/0x3086000, compress 0x0/0x0/0x0, omap 0x2cd98, meta 0x3d43268), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.801876068s of 10.115190506s, submitted: 160
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 37298176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 229 ms_handle_reset con 0x5610896d4400 session 0x561087bad340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 229 ms_handle_reset con 0x5610899eec00 session 0x561087b141c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 37322752 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1931807 data_alloc: 234881024 data_used: 20200335
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x56108b9ac800 session 0x56108947ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610896c7800 session 0x56108b73bc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610896d4400 session 0x56108b8c61c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610899eec00 session 0x561088d4f180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x561089a46400 session 0x56108b8c68c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 36659200 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 ms_handle_reset con 0x5610899ef400 session 0x5610897eddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f873d000/0x0/0x4ffc00000, data 0x35fbe31/0x374d000, compress 0x0/0x0/0x0, omap 0x2d560, meta 0x3d42aa0), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 231 ms_handle_reset con 0x5610896d4400 session 0x561087bac540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610896c7800 session 0x5610897ec1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899eec00 session 0x56108944d180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f8733000/0x0/0x4ffc00000, data 0x35ff5a1/0x3753000, compress 0x0/0x0/0x0, omap 0x2dbe6, meta 0x3d4241a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899ef400 session 0x561086ecea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 36528128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x561089a46400 session 0x561088f46380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 ms_handle_reset con 0x5610899ef800 session 0x561087b9c000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896c7800 session 0x56108b73a540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896d4400 session 0x561088c91340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1983852 data_alloc: 234881024 data_used: 20200335
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899ef400 session 0x561088d01880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610896c8000 session 0x561086cdfa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 35987456 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899eec00 session 0x56108b73b180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 ms_handle_reset con 0x5610899efc00 session 0x56108944cc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x362c198/0x3782000, compress 0x0/0x0/0x0, omap 0x2df47, meta 0x3d420b9), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136167424 unmapped: 35979264 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x362c198/0x3782000, compress 0x0/0x0/0x0, omap 0x2df47, meta 0x3d420b9), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 33669120 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 234 ms_handle_reset con 0x5610899ef800 session 0x561087bada40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 234 ms_handle_reset con 0x5610899ef400 session 0x5610894921c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f8705000/0x0/0x4ffc00000, data 0x362dd34/0x3785000, compress 0x0/0x0/0x0, omap 0x2e251, meta 0x3d41daf), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 33669120 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.314812660s of 11.476747513s, submitted: 89
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 235 ms_handle_reset con 0x561089a46400 session 0x561087b9cc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 235 ms_handle_reset con 0x5610899ef400 session 0x561087bad180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 33398784 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x5610899eec00 session 0x56108944d340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x56108b9ac800 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 236 ms_handle_reset con 0x5610899ef800 session 0x561086d55340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2034221 data_alloc: 234881024 data_used: 25958031
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 33382400 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 237 ms_handle_reset con 0x561089705c00 session 0x56108b73ba40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 237 ms_handle_reset con 0x5610899efc00 session 0x56108b8c7880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 33382400 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138813440 unmapped: 33333248 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 238 ms_handle_reset con 0x5610899ef400 session 0x561087b41880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f86f7000/0x0/0x4ffc00000, data 0x3635167/0x3793000, compress 0x0/0x0/0x0, omap 0x2eccd, meta 0x3d41333), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138813440 unmapped: 33333248 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 238 ms_handle_reset con 0x5610899ef800 session 0x561089493dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x56108b9ac800 session 0x561088c91a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x5610899eec00 session 0x561088d00540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 239 ms_handle_reset con 0x5610899ef400 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138870784 unmapped: 33275904 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 240 ms_handle_reset con 0x5610899ef800 session 0x561088e21180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f86f1000/0x0/0x4ffc00000, data 0x3637210/0x3797000, compress 0x0/0x0/0x0, omap 0x2ee1f, meta 0x3d411e1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2047515 data_alloc: 234881024 data_used: 25958616
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x5610899efc00 session 0x561087b9c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138887168 unmapped: 33259520 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x56108b9ac800 session 0x561088d4e700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 241 ms_handle_reset con 0x561089705800 session 0x561088d4e8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899ef800 session 0x561087bac700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899ef400 session 0x561086e9b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 142696448 unmapped: 29450240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 ms_handle_reset con 0x5610899efc00 session 0x56108b73b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8151000/0x0/0x4ffc00000, data 0x3bccbf6/0x3d2f000, compress 0x0/0x0/0x0, omap 0x2f761, meta 0x3d4089f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 29032448 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8154000/0x0/0x4ffc00000, data 0x3bd5bf6/0x3d38000, compress 0x0/0x0/0x0, omap 0x2f761, meta 0x3d4089f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 30556160 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 243 ms_handle_reset con 0x561088793400 session 0x56108947f340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 243 ms_handle_reset con 0x56108b9ac800 session 0x561088da2fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.641407013s of 10.993301392s, submitted: 175
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2103713 data_alloc: 234881024 data_used: 26803489
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x561088793400 session 0x561089493500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x5610896d4800 session 0x561088e201c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 244 ms_handle_reset con 0x56108bd30400 session 0x56108bcfd180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 30547968 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 245 ms_handle_reset con 0x5610899ef400 session 0x56108bcfcfc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f8eb7000/0x0/0x4ffc00000, data 0x2e69c66/0x2fd1000, compress 0x0/0x0/0x0, omap 0x303e7, meta 0x3d3fc19), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 247 handle_osd_map epochs [247,248], i have 248, src has [1,248]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 248 ms_handle_reset con 0x5610899ef800 session 0x561086ecfa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1982022 data_alloc: 234881024 data_used: 18270598
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 35250176 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 249 ms_handle_reset con 0x561088793400 session 0x561088e20700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 249 ms_handle_reset con 0x5610896d4800 session 0x561086cba540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8ead000/0x0/0x4ffc00000, data 0x2e6f0e0/0x2fd9000, compress 0x0/0x0/0x0, omap 0x30cf6, meta 0x3d3f30a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136904704 unmapped: 35241984 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136929280 unmapped: 35217408 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 251 ms_handle_reset con 0x5610899ef400 session 0x561088d008c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 251 ms_handle_reset con 0x5610899efc00 session 0x561087b14e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 35192832 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136962048 unmapped: 35184640 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.698543549s of 10.047485352s, submitted: 230
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 253 ms_handle_reset con 0x5610891a4c00 session 0x561086d48c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1993876 data_alloc: 234881024 data_used: 18271852
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136986624 unmapped: 35160064 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 254 ms_handle_reset con 0x561088793400 session 0x561088e20380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136994816 unmapped: 35151872 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 254 ms_handle_reset con 0x5610896d4800 session 0x5610897eda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f8ea1000/0x0/0x4ffc00000, data 0x2e77da4/0x2fe7000, compress 0x0/0x0/0x0, omap 0x320d3, meta 0x3d3df2d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136994816 unmapped: 35151872 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 255 ms_handle_reset con 0x5610899ef400 session 0x561087b9ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 255 ms_handle_reset con 0x5610899efc00 session 0x561087bad180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137011200 unmapped: 35135488 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8ea0000/0x0/0x4ffc00000, data 0x2e799d4/0x2fea000, compress 0x0/0x0/0x0, omap 0x3238c, meta 0x3d3dc74), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 256 ms_handle_reset con 0x561088d2a400 session 0x561086ecf180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x2e7b604/0x2fed000, compress 0x0/0x0/0x0, omap 0x326bb, meta 0x3d3d945), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 35127296 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 256 handle_osd_map epochs [256,257], i have 257, src has [1,257]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 257 ms_handle_reset con 0x561088793400 session 0x56108947e380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2006123 data_alloc: 234881024 data_used: 18271852
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137060352 unmapped: 35086336 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610899ef400 session 0x561087b9dc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4800 session 0x561086d49c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f8e95000/0x0/0x4ffc00000, data 0x2e7f297/0x2ff5000, compress 0x0/0x0/0x0, omap 0x3325f, meta 0x3d3cda1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 35069952 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 35069952 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x56108bd30400 session 0x561086dfafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610899efc00 session 0x561088e21180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137093120 unmapped: 35053568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896c7800 session 0x561088f46a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4400 session 0x561086290540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 ms_handle_reset con 0x5610896d4800 session 0x56108b73a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561086f18c00 session 0x561089493dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610899ef400 session 0x561086cba000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561088793400 session 0x56108b73b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x561086f18c00 session 0x56108b8c7880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610896d4400 session 0x561087b38380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 41369600 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610896d4800 session 0x561087bad500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f9ae2000/0x0/0x4ffc00000, data 0x222ae5f/0x23a2000, compress 0x0/0x0/0x0, omap 0x336dd, meta 0x3d3c923), peers [0,1] op hist [0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 ms_handle_reset con 0x5610899eb800 session 0x561086d55dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 260 ms_handle_reset con 0x5610896c7800 session 0x56108947e1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 260 ms_handle_reset con 0x56108bd30400 session 0x56108944ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1905745 data_alloc: 234881024 data_used: 11573391
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 41304064 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f9adc000/0x0/0x4ffc00000, data 0x2202abe/0x237b000, compress 0x0/0x0/0x0, omap 0x33bb6, meta 0x3d3c44a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.792389870s of 11.051105499s, submitted: 170
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 261 ms_handle_reset con 0x561086f18c00 session 0x56108b73b340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 261 heartbeat osd_stat(store_statfs(0x4f9ad8000/0x0/0x4ffc00000, data 0x22046b7/0x237d000, compress 0x0/0x0/0x0, omap 0x33d8a, meta 0x3d3c276), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 41246720 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 262 ms_handle_reset con 0x561088793400 session 0x561086dfb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 262 ms_handle_reset con 0x5610896d4400 session 0x561086cbb880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 263 ms_handle_reset con 0x5610896d4800 session 0x561088dfee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130940928 unmapped: 41205760 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f9b0d000/0x0/0x4ffc00000, data 0x2205e2b/0x237d000, compress 0x0/0x0/0x0, omap 0x343e5, meta 0x3d3bc1b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 263 handle_osd_map epochs [263,264], i have 264, src has [1,264]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912743 data_alloc: 234881024 data_used: 11581415
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561086f18c00 session 0x561086cdfa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561088793400 session 0x56108bcfd180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 heartbeat osd_stat(store_statfs(0x4f9b07000/0x0/0x4ffc00000, data 0x2209566/0x2383000, compress 0x0/0x0/0x0, omap 0x33ac1, meta 0x3d3c53f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 41197568 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896c7800 session 0x56108b8c7880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x56108bd30400 session 0x561088d4e700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x561088793400 session 0x561086dfb880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896c7800 session 0x561086ece700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 ms_handle_reset con 0x5610896d4800 session 0x56108bcfc700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x56108bd30400 session 0x561086cba540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 130957312 unmapped: 41189376 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x5610899eb000 session 0x561088e20a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 266 ms_handle_reset con 0x5610896c7800 session 0x5610894921c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610899ebc00 session 0x56108947ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561087b9c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896d4800 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561086f18c00 session 0x561088f47340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925905 data_alloc: 234881024 data_used: 14727159
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135307264 unmapped: 36839424 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.875901222s of 10.079591751s, submitted: 157
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896c7800 session 0x56108944dc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896d4800 session 0x561089493dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610899ebc00 session 0x561088c91180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x56108bd30400 session 0x561088f46700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135249920 unmapped: 36896768 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9afa000/0x0/0x4ffc00000, data 0x220eb6d/0x238e000, compress 0x0/0x0/0x0, omap 0x32c9b, meta 0x3d3d365), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x561088793400 session 0x561087b38380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 ms_handle_reset con 0x5610896c7800 session 0x561088c91dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135258112 unmapped: 36888576 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9afc000/0x0/0x4ffc00000, data 0x220eb8d/0x2390000, compress 0x0/0x0/0x0, omap 0x35fd7, meta 0x3d3a029), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x5610896d4800 session 0x561086cba380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135266304 unmapped: 36880384 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x5610899ebc00 session 0x56108947e380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 268 ms_handle_reset con 0x56108bd31000 session 0x561089493340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134701056 unmapped: 37445632 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 269 ms_handle_reset con 0x561088793400 session 0x561087b9c700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1989818 data_alloc: 234881024 data_used: 14727772
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 37437440 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x2a52dc2/0x2bd7000, compress 0x0/0x0/0x0, omap 0x3780b, meta 0x3d387f5), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896c7800 session 0x561088dfee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896d4800 session 0x561087bad6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610899ebc00 session 0x561087baca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x56108bd31400 session 0x56108944d500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x561088793400 session 0x561087b39180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896c7800 session 0x561086cbafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 ms_handle_reset con 0x5610896d4800 session 0x561088e20700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 271 ms_handle_reset con 0x5610899ebc00 session 0x56108944ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134742016 unmapped: 37404672 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x2215a20/0x239a000, compress 0x0/0x0/0x0, omap 0x379b7, meta 0x3d38649), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x56108bd31800 session 0x561088d001c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946217 data_alloc: 234881024 data_used: 14728727
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x561088793400 session 0x561086d55340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x5610896c7800 session 0x561086cbbc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 37396480 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x5610896d4800 session 0x56108b73bdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 37396480 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 ms_handle_reset con 0x56108bd30800 session 0x56108947f6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.460597038s of 10.737349510s, submitted: 154
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x5610899ebc00 session 0x561088f47180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x561088d2fc00 session 0x561088d4e540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x561088793400 session 0x561086e9b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108bd30800 session 0x56108947e1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9ae9000/0x0/0x4ffc00000, data 0x2219288/0x239f000, compress 0x0/0x0/0x0, omap 0x37e65, meta 0x3d3819b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946443 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9aee000/0x0/0x4ffc00000, data 0x22191e6/0x239e000, compress 0x0/0x0/0x0, omap 0x37e65, meta 0x3d3819b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108be12000 session 0x56108b73b180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 37642240 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 ms_handle_reset con 0x56108be13800 session 0x561088e21180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135553024 unmapped: 36593664 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9aed000/0x0/0x4ffc00000, data 0x22191f6/0x239f000, compress 0x0/0x0/0x0, omap 0x38004, meta 0x3d37ffc), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 273 handle_osd_map epochs [273,274], i have 274, src has [1,274]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1954831 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9ae7000/0x0/0x4ffc00000, data 0x221ac85/0x23a3000, compress 0x0/0x0/0x0, omap 0x38153, meta 0x3d37ead), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9ae7000/0x0/0x4ffc00000, data 0x221ac85/0x23a3000, compress 0x0/0x0/0x0, omap 0x38153, meta 0x3d37ead), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1954831 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088793400 session 0x561088da2fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088d2fc00 session 0x561089493880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.987413406s of 15.062505722s, submitted: 45
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108bd30800 session 0x561086d49c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 36577280 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108be12000 session 0x561086ecf180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x56108be13c00 session 0x561086d54c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 ms_handle_reset con 0x561088793400 session 0x56108947ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 35504128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 274 handle_osd_map epochs [274,275], i have 275, src has [1,275]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f9aea000/0x0/0x4ffc00000, data 0x221ac75/0x23a2000, compress 0x0/0x0/0x0, omap 0x386b4, meta 0x3d3794c), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 275 ms_handle_reset con 0x56108be12000 session 0x561088f46700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 35504128 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108bd30800 session 0x561087b39340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be13c00 session 0x5610894921c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088d2fc00 session 0x561087bad180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962941 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 35487744 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088d2fc00 session 0x561086cba540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x561088793400 session 0x561086cbae00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 35487744 heap: 172146688 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136724480 unmapped: 56426496 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136781824 unmapped: 56369152 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 56344576 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be13c00 session 0x561086ecefc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f2ee3000/0x0/0x4ffc00000, data 0x8e1e8f1/0x8fa9000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [0,0,0,0,1,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739045 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f16e1000/0x0/0x4ffc00000, data 0xa61e963/0xa7ab000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [0,0,0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 55222272 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 heartbeat osd_stat(store_statfs(0x4eeee1000/0x0/0x4ffc00000, data 0xce1e963/0xcfab000, compress 0x0/0x0/0x0, omap 0x38cbb, meta 0x3d37345), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 55197696 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.002960205s of 10.177382469s, submitted: 82
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 142229504 unmapped: 50921472 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108bd30800 session 0x561089493500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be12400 session 0x561086ecee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 ms_handle_reset con 0x56108be12000 session 0x561088e20380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138092544 unmapped: 55058432 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561086ece700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be13c00 session 0x561086dfb880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 55123968 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be12800 session 0x56108bcfd180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088793400 session 0x5610897ec700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x561088d2fc00 session 0x561088e20380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561089493500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325467 data_alloc: 234881024 data_used: 14729883
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137289728 unmapped: 55861248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be12000 session 0x561086ecefc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108be13c00 session 0x561087bad180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 55828480 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 heartbeat osd_stat(store_statfs(0x4e9e8f000/0x0/0x4ffc00000, data 0x11e6e0e6/0x11ffd000, compress 0x0/0x0/0x0, omap 0x39447, meta 0x3d36bb9), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 56074240 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 ms_handle_reset con 0x56108bd30800 session 0x561088f46c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108695c400 session 0x56108b8c7180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137084928 unmapped: 56066048 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x561089705000 session 0x561086d49c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108ca42000 session 0x56108947ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108ca42400 session 0x5610894921c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108695c400 session 0x561086ecee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x561089705000 session 0x561086dfb880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 ms_handle_reset con 0x56108be12c00 session 0x56108b8c76c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108bd30800 session 0x561088e20a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108ca42000 session 0x561086dfa000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108be12000 session 0x561086dfa540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108695c400 session 0x561089493dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561089705000 session 0x561089493340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088793400 session 0x561088da2fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 55443456 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088d2fc00 session 0x561087b9c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x56108695c400 session 0x561087b9ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398680 data_alloc: 234881024 data_used: 14729981
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 55410688 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 heartbeat osd_stat(store_statfs(0x4e9462000/0x0/0x4ffc00000, data 0x1289192a/0x12a26000, compress 0x0/0x0/0x0, omap 0x3997d, meta 0x3d36683), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 ms_handle_reset con 0x561088793400 session 0x56108944c700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 55410688 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.954721451s of 10.590643883s, submitted: 109
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 280 ms_handle_reset con 0x561089705000 session 0x561087b38700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 55386112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 280 ms_handle_reset con 0x56108be12c00 session 0x561088c91340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 280 heartbeat osd_stat(store_statfs(0x4e9460000/0x0/0x4ffc00000, data 0x12893921/0x12a2a000, compress 0x0/0x0/0x0, omap 0x39b3a, meta 0x3d364c6), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108be12000 session 0x5610897ece00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108bd30800 session 0x561089492700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108be12000 session 0x561087b9c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 55353344 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x56108695c400 session 0x561088d4efc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 ms_handle_reset con 0x561088793400 session 0x561088c90a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 55197696 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 282 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c6380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 282 ms_handle_reset con 0x56108695c400 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504440 data_alloc: 234881024 data_used: 14734627
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 138059776 unmapped: 55091200 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 283 ms_handle_reset con 0x56108be12000 session 0x561087bac700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 139444224 unmapped: 53706752 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 146718720 unmapped: 46432256 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 284 ms_handle_reset con 0x56108ca42c00 session 0x561086cbbc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 284 heartbeat osd_stat(store_statfs(0x4dec2e000/0x0/0x4ffc00000, data 0x1d0be4ea/0x1d25c000, compress 0x0/0x0/0x0, omap 0x3b587, meta 0x3d34a79), peers [0,1] op hist [0,0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 49078272 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 44048384 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x56108bd30800 session 0x561086e9ac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x561089705000 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 285 ms_handle_reset con 0x56108ca43000 session 0x561087b9dc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985933 data_alloc: 234881024 data_used: 24481363
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145104896 unmapped: 48046080 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108695c400 session 0x561087b9d340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 48037888 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.811054707s of 10.003942490s, submitted: 466
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108be12000 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 286 ms_handle_reset con 0x56108ca42c00 session 0x561088da2700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145145856 unmapped: 48005120 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 287 ms_handle_reset con 0x561088d2fc00 session 0x56108b8c7340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 47988736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 287 heartbeat osd_stat(store_statfs(0x4d7c29000/0x0/0x4ffc00000, data 0x240c37fb/0x24261000, compress 0x0/0x0/0x0, omap 0x3bc8d, meta 0x3d34373), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 47988736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991803 data_alloc: 234881024 data_used: 24486587
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 heartbeat osd_stat(store_statfs(0x4d7c24000/0x0/0x4ffc00000, data 0x240c52b2/0x24264000, compress 0x0/0x0/0x0, omap 0x3c015, meta 0x3d33feb), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 47955968 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108695c400 session 0x56108b73a1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x561089705000 session 0x561086dfa000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 43466752 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 43343872 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108be12000 session 0x56108bcfca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 ms_handle_reset con 0x56108ca43000 session 0x561086d48380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 heartbeat osd_stat(store_statfs(0x4d6626000/0x0/0x4ffc00000, data 0x245172c2/0x246b7000, compress 0x0/0x0/0x0, omap 0x3c015, meta 0x4ed3feb), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 43991040 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 148545536 unmapped: 44605440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108ca43000 session 0x561088dff180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108695c400 session 0x56108b73ba40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5040814 data_alloc: 234881024 data_used: 25174715
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x561088d2fc00 session 0x56108947f340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x561089705000 session 0x561087b9ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108be12000 session 0x56108944c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149209088 unmapped: 43941888 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 ms_handle_reset con 0x56108be12000 session 0x561087b416c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 43925504 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x56108695c400 session 0x56108947ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x561088d2fc00 session 0x561089492fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x561089705000 session 0x561089492000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 heartbeat osd_stat(store_statfs(0x4d5f43000/0x0/0x4ffc00000, data 0x24c02b2f/0x24da7000, compress 0x0/0x0/0x0, omap 0x3cca7, meta 0x4ed3359), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149258240 unmapped: 43892736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.983293533s of 10.446523666s, submitted: 220
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 ms_handle_reset con 0x56108ca43400 session 0x561086290540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149258240 unmapped: 43892736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 291 ms_handle_reset con 0x561088d2fc00 session 0x561087b41180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 42991616 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 292 ms_handle_reset con 0x56108695c400 session 0x561086cdefc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 292 ms_handle_reset con 0x56108ca43000 session 0x56108b73b340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5128883 data_alloc: 234881024 data_used: 25175372
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 43589632 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 43581440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 43581440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 292 heartbeat osd_stat(store_statfs(0x4d5a74000/0x0/0x4ffc00000, data 0x250ce302/0x25276000, compress 0x0/0x0/0x0, omap 0x3d6b7, meta 0x4ed2949), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149676032 unmapped: 43474944 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 43458560 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5148337 data_alloc: 234881024 data_used: 25183564
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 293 ms_handle_reset con 0x561089705000 session 0x561087bada40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 293 heartbeat osd_stat(store_statfs(0x4d586d000/0x0/0x4ffc00000, data 0x252d1f81/0x2547c000, compress 0x0/0x0/0x0, omap 0x3d7fa, meta 0x4ed2806), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 43401216 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 293 heartbeat osd_stat(store_statfs(0x4d586d000/0x0/0x4ffc00000, data 0x252d1f81/0x2547c000, compress 0x0/0x0/0x0, omap 0x3d7fa, meta 0x4ed2806), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928338051s of 10.054156303s, submitted: 67
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 43393024 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 294 ms_handle_reset con 0x56108be12000 session 0x561086dfafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 43393024 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561088d2fc00 session 0x56108947fdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x56108bcfda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43000 session 0x561086d55c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108695c400 session 0x561086d55dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca42000 session 0x561088c91dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43c00 session 0x56108944ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43800 session 0x561088d4fc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 heartbeat osd_stat(store_statfs(0x4d5868000/0x0/0x4ffc00000, data 0x252d56b9/0x25482000, compress 0x0/0x0/0x0, omap 0x3dcd6, meta 0x4ed232a), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5196183 data_alloc: 251658240 data_used: 29943116
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108695c400 session 0x561086cbb880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x561086dfb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561088d2fc00 session 0x561087badc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43000 session 0x561088da3880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x561089705000 session 0x561086d55880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 ms_handle_reset con 0x56108ca43800 session 0x561087b9cc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 35979264 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 296 ms_handle_reset con 0x56108695c400 session 0x561086cde380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157204480 unmapped: 35946496 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157220864 unmapped: 35930112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 14K writes, 60K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 14K writes, 4882 syncs, 3.07 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 9331 writes, 36K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 23.64 MB, 0.04 MB/s
Interval WAL: 9331 writes, 3965 syncs, 2.35 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157220864 unmapped: 35930112 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108ca43c00 session 0x56108b73a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108695c400 session 0x561086dfac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 38641664 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 heartbeat osd_stat(store_statfs(0x4d4f0d000/0x0/0x4ffc00000, data 0x25c2fe61/0x25ddf000, compress 0x0/0x0/0x0, omap 0x3e41d, meta 0x4ed1be3), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5241320 data_alloc: 251658240 data_used: 29989955
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 38641664 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x561086f41800 session 0x561086d55dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896d5800 session 0x5610897ecfc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6c00 session 0x56108bcfcc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 38617088 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.773379326s of 11.058368683s, submitted: 45
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6400 session 0x561088dfe000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c7000 session 0x561087bac700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x56108695c400 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x561086f41800 session 0x56108b8c7340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 ms_handle_reset con 0x5610896c6c00 session 0x561088dff180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 heartbeat osd_stat(store_statfs(0x4d4f0c000/0x0/0x4ffc00000, data 0x25c2fe71/0x25de0000, compress 0x0/0x0/0x0, omap 0x3e708, meta 0x4ed18f8), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 38461440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247586 data_alloc: 251658240 data_used: 30114371
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 38461440 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 38453248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 38453248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4f07000/0x0/0x4ffc00000, data 0x25c318f0/0x25de3000, compress 0x0/0x0/0x0, omap 0x3eab2, meta 0x4ed154e), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896d5800 session 0x56108944c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154394624 unmapped: 38756352 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 38748160 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5250786 data_alloc: 251658240 data_used: 30658115
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 156655616 unmapped: 36495360 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161832960 unmapped: 31318016 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4b2e000/0x0/0x4ffc00000, data 0x266f38f0/0x261be000, compress 0x0/0x0/0x0, omap 0x3f277, meta 0x4ed0d89), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561086dfa000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc400 session 0x561086d48380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x561086cdf880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c8800 session 0x561087b9ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 29237248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610888ab800 session 0x56108944c8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc400 session 0x561088da2540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c8800 session 0x56108bcfca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x5610897edc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4239000/0x0/0x4ffc00000, data 0x26fe38f0/0x26ab3000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4239000/0x0/0x4ffc00000, data 0x26fe38f0/0x26ab3000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5425969 data_alloc: 251658240 data_used: 33798723
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 28295168 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.578448296s of 13.991296768s, submitted: 141
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x561088d2ec00 session 0x561088d4f500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165052416 unmapped: 28098560 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4216000/0x0/0x4ffc00000, data 0x27005913/0x26ad6000, compress 0x0/0x0/0x0, omap 0x3f4f9, meta 0x4ed0b07), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x561089705000 session 0x561089492fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x56108ca43000 session 0x561087b41340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x56108ca43800 session 0x561089492540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169082880 unmapped: 24068096 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610893bc800 session 0x561088da2700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5461090 data_alloc: 251658240 data_used: 39815747
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4259000/0x0/0x4ffc00000, data 0x26fa4913/0x26a75000, compress 0x0/0x0/0x0, omap 0x3f5f6, meta 0x4ed0a0a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171024384 unmapped: 22126592 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 21929984 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 heartbeat osd_stat(store_statfs(0x4d4259000/0x0/0x4ffc00000, data 0x26fa4913/0x26a75000, compress 0x0/0x0/0x0, omap 0x3f5f6, meta 0x4ed0a0a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 ms_handle_reset con 0x5610896c9400 session 0x561088c91a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 21823488 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 299 ms_handle_reset con 0x5610893bc800 session 0x561086e9a8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172507136 unmapped: 20643840 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 299 ms_handle_reset con 0x5610896c9400 session 0x561088f47a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 300 ms_handle_reset con 0x5610896c8800 session 0x561086d548c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 20619264 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5519098 data_alloc: 251658240 data_used: 39840339
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 18997248 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 301 ms_handle_reset con 0x561089705000 session 0x5610897ecc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 18956288 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d3ba8000/0x0/0x4ffc00000, data 0x27735c3b/0x27142000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 18956288 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.297831535s of 10.926359177s, submitted: 71
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173834240 unmapped: 19316736 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173883392 unmapped: 19267584 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5608546 data_alloc: 251658240 data_used: 40423580
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d304e000/0x0/0x4ffc00000, data 0x28288c3b/0x27c95000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 18432000 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 301 heartbeat osd_stat(store_statfs(0x4d2fab000/0x0/0x4ffc00000, data 0x2832cbd9/0x27d38000, compress 0x0/0x0/0x0, omap 0x4031f, meta 0x4ecfce1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 302 ms_handle_reset con 0x56108ca43000 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 302 ms_handle_reset con 0x5610893bc800 session 0x5610897ec540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 302 heartbeat osd_stat(store_statfs(0x4d3ace000/0x0/0x4ffc00000, data 0x277487b9/0x2721c000, compress 0x0/0x0/0x0, omap 0x405ed, meta 0x4ecfa13), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 19718144 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 302 heartbeat osd_stat(store_statfs(0x4d3acd000/0x0/0x4ffc00000, data 0x277487c9/0x2721d000, compress 0x0/0x0/0x0, omap 0x405ed, meta 0x4ecfa13), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 303 ms_handle_reset con 0x5610896c8800 session 0x56108b73a8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173490176 unmapped: 19660800 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 ms_handle_reset con 0x561089705000 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 ms_handle_reset con 0x5610896c9400 session 0x56108bcfdc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5544312 data_alloc: 251658240 data_used: 40427676
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 18554880 heap: 193150976 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 heartbeat osd_stat(store_statfs(0x4d28ff000/0x0/0x4ffc00000, data 0x27770e2c/0x27249000, compress 0x0/0x0/0x0, omap 0x40f22, meta 0x606f0de), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 51765248 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 192200704 unmapped: 34545664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 heartbeat osd_stat(store_statfs(0x4cf8fe000/0x0/0x4ffc00000, data 0x2a770e3b/0x2a24a000, compress 0x0/0x0/0x0, omap 0x40faa, meta 0x606f056), peers [0,1] op hist [0,0,0,0,0,1,3])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175570944 unmapped: 51175424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.440034866s of 10.162814140s, submitted: 189
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184696832 unmapped: 42049536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6979924 data_alloc: 251658240 data_used: 40427948
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 46710784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x561086fbe400 session 0x56108b73b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x56108ca43800 session 0x561087b40a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610893bc800 session 0x56108947f340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610896c8800 session 0x561088dfec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 46489600 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 heartbeat osd_stat(store_statfs(0x4c06ca000/0x0/0x4ffc00000, data 0x399a48aa/0x3947e000, compress 0x0/0x0/0x0, omap 0x4138a, meta 0x606ec76), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180404224 unmapped: 46342144 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x5610896c9400 session 0x561086cbaa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 heartbeat osd_stat(store_statfs(0x4cdaca000/0x0/0x4ffc00000, data 0x2b1a989b/0x2ac82000, compress 0x0/0x0/0x0, omap 0x41412, meta 0x606ebee), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178274304 unmapped: 48472064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 ms_handle_reset con 0x561089705000 session 0x561087b9d880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178274304 unmapped: 48472064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610893bc800 session 0x561086e9afc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x561088d2ec00 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610893bc400 session 0x561086dfafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5647563 data_alloc: 251658240 data_used: 40427948
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x5610896c8800 session 0x56108bcfddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 52772864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x56108ca43800 session 0x56108b8c6380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 ms_handle_reset con 0x561088d2ec00 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 51732480 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 49528832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 307 heartbeat osd_stat(store_statfs(0x4d39f4000/0x0/0x4ffc00000, data 0x26680458/0x26158000, compress 0x0/0x0/0x0, omap 0x418bd, meta 0x606e743), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 307 ms_handle_reset con 0x5610896c9400 session 0x561087bace00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 49332224 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.130283356s of 10.143652916s, submitted: 340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610893bc400 session 0x561088e208c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610893bc800 session 0x561086dfb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610896d5800 session 0x561086cbbc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x561087057c00 session 0x561087b14a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5374520 data_alloc: 251658240 data_used: 37076016
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 ms_handle_reset con 0x5610896d5800 session 0x561087b15880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175996928 unmapped: 50749440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:12:54 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501742056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x5610896c8800 session 0x561088d4f180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561088d2ec00 session 0x56108cc2a000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 50946048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x5610893bc400 session 0x561088d4fdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561087057c00 session 0x56108b73a8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 50864128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 ms_handle_reset con 0x561088d2ec00 session 0x561086e9a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 heartbeat osd_stat(store_statfs(0x4e4161000/0x0/0x4ffc00000, data 0x1383026e/0x139e8000, compress 0x0/0x0/0x0, omap 0x42d55, meta 0x606d2ab), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 310 handle_osd_map epochs [311,311], i have 311, src has [1,311]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 311 ms_handle_reset con 0x5610896c8800 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174563328 unmapped: 52183040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2661024 data_alloc: 251658240 data_used: 36244332
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 313 ms_handle_reset con 0x5610896d5800 session 0x561087b9c8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 313 ms_handle_reset con 0x5610893bc800 session 0x561088f46a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x56108be12c00 session 0x561086cbae00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x56108ca42800 session 0x561088da21c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 54460416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583928108s of 10.055441856s, submitted: 298
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 314 ms_handle_reset con 0x5610893bc800 session 0x56108947f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f664a000/0x0/0x4ffc00000, data 0x3346151/0x3502000, compress 0x0/0x0/0x0, omap 0x43811, meta 0x606c7ef), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2417537 data_alloc: 234881024 data_used: 16199279
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f752b000/0x0/0x4ffc00000, data 0x2462ba6/0x261e000, compress 0x0/0x0/0x0, omap 0x43936, meta 0x606c6ca), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 65781760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f752b000/0x0/0x4ffc00000, data 0x2462ba6/0x261e000, compress 0x0/0x0/0x0, omap 0x43936, meta 0x606c6ca), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 315 ms_handle_reset con 0x561087057c00 session 0x561086dfb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 67551232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 67534848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401471 data_alloc: 234881024 data_used: 14101517
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 315 handle_osd_map epochs [317,317], i have 315, src has [1,317]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 315 handle_osd_map epochs [316,317], i have 315, src has [1,317]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561088d2ec00 session 0x561087bacc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f7727000/0x0/0x4ffc00000, data 0x226503c/0x2423000, compress 0x0/0x0/0x0, omap 0x43dba, meta 0x606c246), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561087057c00 session 0x561087b15880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x56108ca42800 session 0x561087b14a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x56108be12c00 session 0x56108947f340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x5610896c8800 session 0x5610897ec540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 67624960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.965824127s of 10.050975800s, submitted: 67
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x5610896d5800 session 0x561087b40a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 ms_handle_reset con 0x561087057c00 session 0x56108bcfddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 67608576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409832 data_alloc: 234881024 data_used: 14102137
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 67600384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 318 ms_handle_reset con 0x5610896c8800 session 0x561087b15500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x5610896d5800 session 0x56108b73a540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f771e000/0x0/0x4ffc00000, data 0x2268be4/0x242a000, compress 0x0/0x0/0x0, omap 0x443f1, meta 0x606bc0f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f771e000/0x0/0x4ffc00000, data 0x2268be4/0x242a000, compress 0x0/0x0/0x0, omap 0x443f1, meta 0x606bc0f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x56108be12c00 session 0x561089492c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x5610896c9400 session 0x56108b8c6c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x56108ca42800 session 0x561088f46c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 ms_handle_reset con 0x561087057c00 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158375936 unmapped: 68370432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2421042 data_alloc: 234881024 data_used: 14102102
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 68321280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x561088da2380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 68321280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c9400 session 0x561087b39c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896d5800 session 0x561086d54700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x56108bcfca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c9400 session 0x56108947efc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x56108ca42800 session 0x561087bada40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 68255744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x56108be12c00 session 0x561087b9d880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x561087057c00 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 ms_handle_reset con 0x5610896c8800 session 0x561088da2540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.975288391s of 10.086947441s, submitted: 55
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f771b000/0x0/0x4ffc00000, data 0x226a850/0x2430000, compress 0x0/0x0/0x0, omap 0x4480f, meta 0x606b7f1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c9400 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158539776 unmapped: 68206592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108ca42800 session 0x56108b73a540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436001 data_alloc: 234881024 data_used: 14102722
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 68198400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561088da21c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561087b39c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x561087057c00 session 0x561088dff180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 68198400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c8800 session 0x561086d48380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c9400 session 0x561087b396c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108ca42800 session 0x5610897edc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x56108695d800 session 0x561087bace00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158515200 unmapped: 68231168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x226c7e5/0x2434000, compress 0x0/0x0/0x0, omap 0x44b1f, meta 0x606b4e1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 ms_handle_reset con 0x5610896c8800 session 0x5610897ed880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561087057c00 session 0x561086e9b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c9400 session 0x561086cba8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 68222976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561088d3f800 session 0x561086d54700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2438717 data_alloc: 234881024 data_used: 14103370
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x56108695d800 session 0x561088e208c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561087057c00 session 0x561086e9afc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f7713000/0x0/0x4ffc00000, data 0x226e3b3/0x2437000, compress 0x0/0x0/0x0, omap 0x44c47, meta 0x606b3b9), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 68640768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c8800 session 0x561088dffdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x561088d29400 session 0x561088d00540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 68378624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 ms_handle_reset con 0x5610896c9400 session 0x56108b73b6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.958703995s of 10.064065933s, submitted: 89
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108695d800 session 0x56108cc2a000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d2fc00 session 0x561086dfb500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 68378624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108bd30c00 session 0x561087b15340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561087057c00 session 0x561087bacc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d29400 session 0x56108bcfd500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2450039 data_alloc: 234881024 data_used: 14103955
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x56108695d800 session 0x561087b40a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561087057c00 session 0x561087b15500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 ms_handle_reset con 0x561088d2fc00 session 0x561089492c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 68354048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30c00 session 0x561086dfac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x5610896c8800 session 0x561088da2380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108695d800 session 0x561087b9ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 68354048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x561087057c00 session 0x561086cbaa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x561088d2fc00 session 0x561087bada40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f76cb000/0x0/0x4ffc00000, data 0x22b1bdc/0x247f000, compress 0x0/0x0/0x0, omap 0x4545f, meta 0x606aba1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30c00 session 0x56108944dc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158416896 unmapped: 68329472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108bd30000 session 0x561087bace00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 ms_handle_reset con 0x56108695d800 session 0x561088da2540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 68345856 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158433280 unmapped: 68313088 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561087057c00 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088d2fc00 session 0x561086e9afc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2455665 data_alloc: 234881024 data_used: 14104533
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x56108bd30c00 session 0x56108944c1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088793400 session 0x561088d4ee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x561088792c00 session 0x561088d00000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f76cb000/0x0/0x4ffc00000, data 0x22b35c3/0x247f000, compress 0x0/0x0/0x0, omap 0x45ac9, meta 0x606a537), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 68288512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 ms_handle_reset con 0x56108695d800 session 0x561089492fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.161159515s of 10.308839798s, submitted: 95
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158449664 unmapped: 68296704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 326 ms_handle_reset con 0x561088d2fc00 session 0x561089493180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2460706 data_alloc: 234881024 data_used: 14105141
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f76c9000/0x0/0x4ffc00000, data 0x22b5032/0x2481000, compress 0x0/0x0/0x0, omap 0x45c3f, meta 0x606a3c1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158449664 unmapped: 68296704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 327 ms_handle_reset con 0x56108bd30c00 session 0x561088da2c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 68435968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x561086f18c00 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x56108695d800 session 0x561086d488c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 68435968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158326784 unmapped: 68419584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 328 ms_handle_reset con 0x561088792c00 session 0x561088d4e380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561088d2fc00 session 0x561086d48fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x5610899ea400 session 0x56108944d340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561087057c00 session 0x561086dfac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x561088793400 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158384128 unmapped: 68362240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 heartbeat osd_stat(store_statfs(0x4f76be000/0x0/0x4ffc00000, data 0x22ba378/0x248c000, compress 0x0/0x0/0x0, omap 0x46ec3, meta 0x606913d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 ms_handle_reset con 0x56108695d800 session 0x561088d00c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2477005 data_alloc: 234881024 data_used: 14361141
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x56108bd30c00 session 0x561086e9b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 68345856 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561088792c00 session 0x56108bcfda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561087057c00 session 0x561088da3880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158048256 unmapped: 68698112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 ms_handle_reset con 0x561087057c00 session 0x561086cbae00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f76bb000/0x0/0x4ffc00000, data 0x22bbf24/0x2490000, compress 0x0/0x0/0x0, omap 0x4709d, meta 0x6068f63), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088792c00 session 0x561088dfe1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108695d800 session 0x561086ecf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088793400 session 0x561088da3dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 67575808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108bd30c00 session 0x561086d55880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561087057c00 session 0x561086d55500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x56108695d800 session 0x561086d49500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 67575808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 ms_handle_reset con 0x561088792c00 session 0x56108cc2a1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.880310059s of 10.131997108s, submitted: 116
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 332 ms_handle_reset con 0x5610899ea400 session 0x561087b9ce00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 67559424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 332 heartbeat osd_stat(store_statfs(0x4f76f5000/0x0/0x4ffc00000, data 0x227f863/0x2455000, compress 0x0/0x0/0x0, omap 0x475d9, meta 0x6068a27), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2487270 data_alloc: 234881024 data_used: 14105216
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088d2fc00 session 0x561088f46a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x56108695d800 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088793400 session 0x561088dff180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x5610899eb800 session 0x5610897ecfc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 67543040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088792c00 session 0x561086d49500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x56108695d800 session 0x561088c91180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561087057c00 session 0x561088d4fa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 68599808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x5610899eb800 session 0x56108947fa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 ms_handle_reset con 0x561088d2fc00 session 0x561086d55180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088793400 session 0x56108b73a540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x56108695d800 session 0x561087b41340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088792c00 session 0x56108944d340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561087057c00 session 0x561087bac380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 68534272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088793400 session 0x56108b73a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 68509696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 ms_handle_reset con 0x561088d2fc00 session 0x561086d49c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x56108695d800 session 0x561088dfe8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f76eb000/0x0/0x4ffc00000, data 0x2284d5e/0x245d000, compress 0x0/0x0/0x0, omap 0x47e85, meta 0x606817b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 68509696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x561087057c00 session 0x561086dfafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 335 ms_handle_reset con 0x561088793400 session 0x561087b40000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2509894 data_alloc: 234881024 data_used: 14762005
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899ea400 session 0x561086ecf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899eb800 session 0x561087b41180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 67919872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561088792c00 session 0x561088da2000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 67903488 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561087057c00 session 0x56108944ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x561088793400 session 0x56108cc2b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 ms_handle_reset con 0x5610899ea400 session 0x561086290fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 337 ms_handle_reset con 0x5610896d5c00 session 0x561088d00000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 67633152 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x5610899eb000 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x561087057c00 session 0x561086291180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x56108695d800 session 0x561088da3a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 338 ms_handle_reset con 0x561088792c00 session 0x561087b38380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f7678000/0x0/0x4ffc00000, data 0x22f62bf/0x24d2000, compress 0x0/0x0/0x0, omap 0x487e3, meta 0x606781d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 339 ms_handle_reset con 0x561088793400 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2521801 data_alloc: 234881024 data_used: 14762087
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 339 ms_handle_reset con 0x56108695d800 session 0x56108b73aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 68493312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.879584312s of 12.438203812s, submitted: 176
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561087057c00 session 0x561087bacc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561088793400 session 0x561088c91dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x561088792c00 session 0x561086dfb180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x5610899eb000 session 0x561087b14a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f7677000/0x0/0x4ffc00000, data 0x22f825b/0x24d5000, compress 0x0/0x0/0x0, omap 0x48913, meta 0x60676ed), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 ms_handle_reset con 0x56108695d800 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f766f000/0x0/0x4ffc00000, data 0x22fb94a/0x24db000, compress 0x0/0x0/0x0, omap 0x49d60, meta 0x60662a0), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561087057c00 session 0x56108b73aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2532107 data_alloc: 234881024 data_used: 14762087
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 68476928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561088793400 session 0x561088d4fc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x561088792c00 session 0x561088d4e380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 342 ms_handle_reset con 0x5610899ea400 session 0x561086d49880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561087057c00 session 0x561087b416c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561088793400 session 0x561089492000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 68468736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 343 ms_handle_reset con 0x561088792c00 session 0x56108cc2bdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x56108695d800 session 0x56108947efc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x5610896d4c00 session 0x56108944d180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 68468736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 344 ms_handle_reset con 0x561087057c00 session 0x561088da3880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 344 handle_osd_map epochs [344,345], i have 345, src has [1,345]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 345 ms_handle_reset con 0x561088792c00 session 0x56108947ee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 69001216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2546297 data_alloc: 234881024 data_used: 14911079
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x561088793400 session 0x561086d55a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f765e000/0x0/0x4ffc00000, data 0x2304353/0x24ec000, compress 0x0/0x0/0x0, omap 0x4aba5, meta 0x606545b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610896d4800 session 0x561087b38e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610896d4400 session 0x561087b41dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 ms_handle_reset con 0x5610899ee800 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.287918091s of 10.447829247s, submitted: 97
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x561087057c00 session 0x561086ecee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x56108695d800 session 0x56108944c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 347 ms_handle_reset con 0x5610896d4c00 session 0x561087bada40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561088792c00 session 0x561086cdf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561088793400 session 0x561088dffdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 68771840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x561087057c00 session 0x561088d00e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x5610899ee800 session 0x56108bcfd500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x56108695d800 session 0x561088e208c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f765a000/0x0/0x4ffc00000, data 0x2307a7f/0x24f0000, compress 0x0/0x0/0x0, omap 0x4b103, meta 0x6064efd), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 ms_handle_reset con 0x5610896d4800 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x56108695d800 session 0x56108b8c6c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x5610896d4c00 session 0x56108b73a1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 68763648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x561087057c00 session 0x561086dfac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2553638 data_alloc: 234881024 data_used: 14912207
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x561088793400 session 0x56108944da40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f7656000/0x0/0x4ffc00000, data 0x2309627/0x24f2000, compress 0x0/0x0/0x0, omap 0x4b74c, meta 0x60648b4), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 68755456 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 ms_handle_reset con 0x5610896d4800 session 0x56108944ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 68739072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 350 ms_handle_reset con 0x56108695d800 session 0x56108b73a1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158015488 unmapped: 68730880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 351 ms_handle_reset con 0x5610896d4800 session 0x561088d016c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 352 ms_handle_reset con 0x561088793400 session 0x561088f47a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158031872 unmapped: 68714496 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 352 ms_handle_reset con 0x561087057c00 session 0x561086e9b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 352 handle_osd_map epochs [352,353], i have 353, src has [1,353]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 353 ms_handle_reset con 0x5610899ee800 session 0x561087b40a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 353 ms_handle_reset con 0x5610896d4c00 session 0x561088f46000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158040064 unmapped: 68706304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2562466 data_alloc: 234881024 data_used: 14764413
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 158064640 unmapped: 68681728 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561087057c00 session 0x561088dffdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x56108695d800 session 0x561088d4fc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x5610896d4800 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f76b7000/0x0/0x4ffc00000, data 0x22a46c1/0x2493000, compress 0x0/0x0/0x0, omap 0x4c222, meta 0x6063dde), peers [0,1] op hist [0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561088793400 session 0x561088da3dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 ms_handle_reset con 0x561087057c00 session 0x5610897ecfc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610896d4800 session 0x561087b9c8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157212672 unmapped: 69533696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x56108695d800 session 0x561087bacc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610896d4c00 session 0x561088d00000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.121862411s of 10.008955956s, submitted: 191
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 ms_handle_reset con 0x5610899efc00 session 0x561089492fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f76b4000/0x0/0x4ffc00000, data 0x22a7a34/0x2498000, compress 0x0/0x0/0x0, omap 0x4c6ac, meta 0x6063954), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 69468160 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x56108695d800 session 0x561088c90c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x561087057c00 session 0x56108bcfddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 356 ms_handle_reset con 0x5610899ee400 session 0x561088dfe1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x5610896d4800 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2574925 data_alloc: 234881024 data_used: 14766881
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x5610896d4c00 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x56108695d800 session 0x56108b73aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 358 ms_handle_reset con 0x561087057c00 session 0x561088d4e000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 359 ms_handle_reset con 0x5610896d4800 session 0x561088779340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 69410816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f6e0f000/0x0/0x4ffc00000, data 0x2b41a4f/0x2d39000, compress 0x0/0x0/0x0, omap 0x4cf6a, meta 0x6063096), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f6e0f000/0x0/0x4ffc00000, data 0x2b41a4f/0x2d39000, compress 0x0/0x0/0x0, omap 0x4cf6a, meta 0x6063096), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 69410816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 359 handle_osd_map epochs [359,360], i have 360, src has [1,360]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157351936 unmapped: 69394432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899ee400 session 0x56108b73a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899eec00 session 0x5610897ed6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f6e0e000/0x0/0x4ffc00000, data 0x2b43562/0x2d3c000, compress 0x0/0x0/0x0, omap 0x4d3b3, meta 0x6062c4d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2633424 data_alloc: 234881024 data_used: 14767494
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 69386240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 69386240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x56108695d800 session 0x561086d55180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.341822624s of 10.002419472s, submitted: 134
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610896d4800 session 0x561086ecf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x561087057c00 session 0x561088d4ec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 69435392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157319168 unmapped: 69427200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 ms_handle_reset con 0x5610899ef400 session 0x56108cc2b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635905 data_alloc: 234881024 data_used: 14767592
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f6e11000/0x0/0x4ffc00000, data 0x2b43552/0x2d3b000, compress 0x0/0x0/0x0, omap 0x4d512, meta 0x6062aee), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 361 ms_handle_reset con 0x5610891a4c00 session 0x561088d4f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 361 ms_handle_reset con 0x561089a47c00 session 0x561088d4f6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 362 ms_handle_reset con 0x5610899ee400 session 0x56108947f180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 362 handle_osd_map epochs [362,363], i have 363, src has [1,363]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x56108695d800 session 0x561086ecee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x561087057c00 session 0x561086d55500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648546 data_alloc: 234881024 data_used: 14768177
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x561089a47800 session 0x561086d481c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 69419008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 ms_handle_reset con 0x5610899ef400 session 0x561087badc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 heartbeat osd_stat(store_statfs(0x4f6e05000/0x0/0x4ffc00000, data 0x2b48815/0x2d47000, compress 0x0/0x0/0x0, omap 0x4de23, meta 0x60621dd), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561089a47800 session 0x561088d4f180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x56108695d800 session 0x561086d488c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 69402624 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561089a47c00 session 0x561088f46000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x561087057c00 session 0x561087b14000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.017593384s of 10.174485207s, submitted: 80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 69386240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 ms_handle_reset con 0x5610899ee400 session 0x561086d55c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 365 ms_handle_reset con 0x5610896d4800 session 0x56108bcfcc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161177600 unmapped: 65568768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f6dfa000/0x0/0x4ffc00000, data 0x2b4c034/0x2d50000, compress 0x0/0x0/0x0, omap 0x4e3fa, meta 0x6061c06), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x5610899ef400 session 0x56108947f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x561089a47800 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719695 data_alloc: 234881024 data_used: 23751676
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 366 ms_handle_reset con 0x561089a47c00 session 0x561087b9d6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 65519616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 367 ms_handle_reset con 0x5610896d4800 session 0x5610897ec000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 367 ms_handle_reset con 0x5610899ee400 session 0x56108bcfd340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 65503232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 368 ms_handle_reset con 0x561089a47400 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 368 ms_handle_reset con 0x5610899ef400 session 0x56108b73b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161251328 unmapped: 65495040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 368 handle_osd_map epochs [368,369], i have 369, src has [1,369]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 369 ms_handle_reset con 0x561089a47800 session 0x561087b41180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161398784 unmapped: 65347584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 369 ms_handle_reset con 0x561089a46000 session 0x561087b15500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2727282 data_alloc: 234881024 data_used: 23752532
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f6df0000/0x0/0x4ffc00000, data 0x2b53069/0x2d5a000, compress 0x0/0x0/0x0, omap 0x4ef1c, meta 0x60610e4), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161398784 unmapped: 65347584 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 370 ms_handle_reset con 0x5610896d4800 session 0x56108b73b340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6df0000/0x0/0x4ffc00000, data 0x2b53069/0x2d5a000, compress 0x0/0x0/0x0, omap 0x4ef1c, meta 0x60610e4), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 65339392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 65339392 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.410625458s of 10.643949509s, submitted: 91
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 370 ms_handle_reset con 0x5610899ee400 session 0x5610897edc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 58990592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6a83000/0x0/0x4ffc00000, data 0x2ec0e54/0x30c9000, compress 0x0/0x0/0x0, omap 0x4f04b, meta 0x6060fb5), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 58834944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2777598 data_alloc: 234881024 data_used: 24203092
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 371 ms_handle_reset con 0x561089a47400 session 0x561086cdea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 372 ms_handle_reset con 0x561089a46c00 session 0x561087b40700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 58703872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x561089a47000 session 0x561086cdee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 58703872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x5610899ef400 session 0x561088f476c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f6749000/0x0/0x4ffc00000, data 0x31f5527/0x3401000, compress 0x0/0x0/0x0, omap 0x4f5e3, meta 0x6060a1d), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 373 ms_handle_reset con 0x5610896d4800 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 58695680 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x5610899ee400 session 0x56108bcfd500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x5610896c9c00 session 0x561088f46380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 58376192 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 374 ms_handle_reset con 0x561089a47400 session 0x561088f47a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 375 ms_handle_reset con 0x561089a46400 session 0x561086cde8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 375 ms_handle_reset con 0x561089a46000 session 0x561089492e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168386560 unmapped: 58359808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2794669 data_alloc: 234881024 data_used: 24203408
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168394752 unmapped: 58351616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610896c9c00 session 0x561087b9ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f673c000/0x0/0x4ffc00000, data 0x31fc59d/0x340e000, compress 0x0/0x0/0x0, omap 0x5011f, meta 0x605fee1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f673c000/0x0/0x4ffc00000, data 0x31fc59d/0x340e000, compress 0x0/0x0/0x0, omap 0x5011f, meta 0x605fee1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.340101242s of 11.138490677s, submitted: 175
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610899ef400 session 0x561088dfe700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 58310656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610896d4800 session 0x561086dfbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 ms_handle_reset con 0x5610899ee400 session 0x561087b38e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 377 ms_handle_reset con 0x5610896c9c00 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2804968 data_alloc: 234881024 data_used: 24204646
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169549824 unmapped: 57196544 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 378 ms_handle_reset con 0x561089a46000 session 0x561086cdf6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 378 ms_handle_reset con 0x561089a46400 session 0x5610897ec540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169615360 unmapped: 57131008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 379 ms_handle_reset con 0x5610896c9c00 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169623552 unmapped: 57122816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6730000/0x0/0x4ffc00000, data 0x3201b3a/0x341a000, compress 0x0/0x0/0x0, omap 0x50a34, meta 0x605f5cc), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 379 ms_handle_reset con 0x5610896d4800 session 0x561088f476c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169623552 unmapped: 57122816 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6730000/0x0/0x4ffc00000, data 0x3201b3a/0x341a000, compress 0x0/0x0/0x0, omap 0x50a34, meta 0x605f5cc), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 57106432 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x561089a47400 session 0x56108b73b340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x561089a46000 session 0x561088c90fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 ms_handle_reset con 0x5610896c9000 session 0x561089492000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2815534 data_alloc: 234881024 data_used: 24206185
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f672f000/0x0/0x4ffc00000, data 0x320361d/0x341d000, compress 0x0/0x0/0x0, omap 0x50b67, meta 0x605f499), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169648128 unmapped: 57098240 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 380 handle_osd_map epochs [381,381], i have 381, src has [1,381]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x561089a47000 session 0x561087b39500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x5610896c9c00 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 57090048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 ms_handle_reset con 0x5610899ee400 session 0x56108944c1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6729000/0x0/0x4ffc00000, data 0x3205247/0x3421000, compress 0x0/0x0/0x0, omap 0x50fdd, meta 0x605f023), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 57090048 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f6729000/0x0/0x4ffc00000, data 0x3205247/0x3421000, compress 0x0/0x0/0x0, omap 0x50fdd, meta 0x605f023), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 382 ms_handle_reset con 0x5610896d4800 session 0x561087b9d6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 57065472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 382 ms_handle_reset con 0x561089a46000 session 0x561086cde8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.642509460s of 10.354652405s, submitted: 105
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 383 ms_handle_reset con 0x5610896c9c00 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 57040896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x561089a47400 session 0x561086ecf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610899ee400 session 0x561086cba8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610896c8c00 session 0x56108947e000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833691 data_alloc: 234881024 data_used: 24206583
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 384 ms_handle_reset con 0x5610896d4800 session 0x561086d54540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 56885248 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 ms_handle_reset con 0x5610896c9800 session 0x561086ecfc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 ms_handle_reset con 0x561089a47000 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169869312 unmapped: 56877056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f671f000/0x0/0x4ffc00000, data 0x320c173/0x342b000, compress 0x0/0x0/0x0, omap 0x51b3b, meta 0x605e4c5), peers [0,1] op hist [0,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171016192 unmapped: 55730176 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896c8c00 session 0x561086d49500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896c9c00 session 0x56108bcfcc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170844160 unmapped: 55902208 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 386 ms_handle_reset con 0x5610896d4800 session 0x561088d4f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610896c9800 session 0x561086d55880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610896c8c00 session 0x561088f47880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 387 ms_handle_reset con 0x5610899ee400 session 0x56108bcfda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 55533568 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2842749 data_alloc: 234881024 data_used: 24210388
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171278336 unmapped: 55468032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f66ee000/0x0/0x4ffc00000, data 0x323b66e/0x345a000, compress 0x0/0x0/0x0, omap 0x5231d, meta 0x605dce3), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171286528 unmapped: 55459840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f66ee000/0x0/0x4ffc00000, data 0x323b66e/0x345a000, compress 0x0/0x0/0x0, omap 0x5231d, meta 0x605dce3), peers [0,1] op hist [0,0,0,0,1,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 389 ms_handle_reset con 0x561089a47400 session 0x5610897eda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 389 ms_handle_reset con 0x5610896c8000 session 0x561088f47180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 55435264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 55435264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845989 data_alloc: 234881024 data_used: 24262514
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.512095451s of 11.085718155s, submitted: 186
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f66eb000/0x0/0x4ffc00000, data 0x323ed67/0x345f000, compress 0x0/0x0/0x0, omap 0x529d2, meta 0x605d62e), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 390 ms_handle_reset con 0x5610896c8c00 session 0x56108947e1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 55427072 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 391 ms_handle_reset con 0x5610899ee400 session 0x561086cde380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871510 data_alloc: 234881024 data_used: 24905586
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f6632000/0x0/0x4ffc00000, data 0x32f683a/0x3518000, compress 0x0/0x0/0x0, omap 0x52e65, meta 0x605d19b), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 392 ms_handle_reset con 0x561089a47400 session 0x56108944d6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 55951360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 393 ms_handle_reset con 0x561088d25c00 session 0x561089493dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 393 ms_handle_reset con 0x5610896c9800 session 0x5610897ed6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 394 ms_handle_reset con 0x561088d25c00 session 0x561086cdfa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f6624000/0x0/0x4ffc00000, data 0x32fbbb6/0x3521000, compress 0x0/0x0/0x0, omap 0x53560, meta 0x605caa0), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 55943168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 396 ms_handle_reset con 0x5610896c8c00 session 0x561086ecf180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 396 ms_handle_reset con 0x5610899ee400 session 0x561088f47a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885857 data_alloc: 234881024 data_used: 24910295
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171253760 unmapped: 55492608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f6621000/0x0/0x4ffc00000, data 0x32ff25d/0x3527000, compress 0x0/0x0/0x0, omap 0x54a2d, meta 0x605b5d3), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885857 data_alloc: 234881024 data_used: 24910295
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 55484416 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.559764862s of 16.016971588s, submitted: 118
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f6625000/0x0/0x4ffc00000, data 0x32ff25d/0x3527000, compress 0x0/0x0/0x0, omap 0x54a2d, meta 0x605b5d3), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 55320576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f6620000/0x0/0x4ffc00000, data 0x3300d14/0x352a000, compress 0x0/0x0/0x0, omap 0x54b61, meta 0x605b49f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 397 ms_handle_reset con 0x561089a47400 session 0x56108947f6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2887447 data_alloc: 234881024 data_used: 24910295
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 55312384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f6620000/0x0/0x4ffc00000, data 0x3300d14/0x352a000, compress 0x0/0x0/0x0, omap 0x54b61, meta 0x605b49f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171474944 unmapped: 55271424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 398 ms_handle_reset con 0x5610891a4000 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 398 ms_handle_reset con 0x561088d25c00 session 0x561086cba8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171491328 unmapped: 55255040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610899ee400 session 0x561088dfee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610896c8c00 session 0x561086cba540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171360256 unmapped: 55386112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f661c000/0x0/0x4ffc00000, data 0x33044a0/0x3530000, compress 0x0/0x0/0x0, omap 0x55133, meta 0x605aecd), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2893368 data_alloc: 234881024 data_used: 24910908
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x561089a47400 session 0x561086ecf500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.551637650s of 10.607804298s, submitted: 44
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 ms_handle_reset con 0x5610891a9800 session 0x561086cdee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 55377920 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 400 ms_handle_reset con 0x561088d25c00 session 0x56108947f880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172449792 unmapped: 54296576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f660d000/0x0/0x4ffc00000, data 0x33096e3/0x3539000, compress 0x0/0x0/0x0, omap 0x5583f, meta 0x605a7c1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172457984 unmapped: 54288384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610896c9c00 session 0x561086d481c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x561089a47000 session 0x561088da36c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f660d000/0x0/0x4ffc00000, data 0x33096e3/0x3539000, compress 0x0/0x0/0x0, omap 0x5583f, meta 0x605a7c1), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902299 data_alloc: 234881024 data_used: 24910908
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610896c8c00 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172482560 unmapped: 54263808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172482560 unmapped: 54263808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x5610899ee400 session 0x561087baca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 ms_handle_reset con 0x561088d25c00 session 0x561088da2700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172515328 unmapped: 54231040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 403 ms_handle_reset con 0x5610896c8c00 session 0x561087b9c540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 54222848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 403 ms_handle_reset con 0x561089a47000 session 0x56108944c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x5610896c9c00 session 0x561088e20000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892571 data_alloc: 234881024 data_used: 24778060
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f66e9000/0x0/0x4ffc00000, data 0x322cebb/0x345f000, compress 0x0/0x0/0x0, omap 0x5a56e, meta 0x6055a92), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 54214656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059605598s of 10.246625900s, submitted: 110
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x561087057c00 session 0x561088e20540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x56108695d800 session 0x561088d4e540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x561088d25c00 session 0x561088f476c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 ms_handle_reset con 0x5610896c8c00 session 0x561088da2c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172548096 unmapped: 54198272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 54190080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x5610896c9c00 session 0x56108944d500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 54181888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x561089a47000 session 0x56108bcfd340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x561088d25c00 session 0x561087b14e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2895321 data_alloc: 234881024 data_used: 24778025
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 405 ms_handle_reset con 0x5610896c8c00 session 0x561088dff500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x56108695d800 session 0x561088d4ee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166330368 unmapped: 60416000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f7615000/0x0/0x4ffc00000, data 0x2300688/0x2533000, compress 0x0/0x0/0x0, omap 0x5ac4d, meta 0x60553b3), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x5610896c9c00 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561089a47400 session 0x561088f47180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166338560 unmapped: 60407808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x56108695d800 session 0x561088f476c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166338560 unmapped: 60407808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561088d25c00 session 0x561086d55500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x561088d27c00 session 0x561088da2540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 ms_handle_reset con 0x5610896c9c00 session 0x561086cdf880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166379520 unmapped: 60366848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 407 ms_handle_reset con 0x5610896c8c00 session 0x561088dfe540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 408 ms_handle_reset con 0x561088d25c00 session 0x561086cdf340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166379520 unmapped: 60366848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2764811 data_alloc: 234881024 data_used: 14780697
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 408 ms_handle_reset con 0x561088d27c00 session 0x561087b401c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x56108695d800 session 0x561087bad6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x5610896c9c00 session 0x561088c91500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x561088d26000 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f760f000/0x0/0x4ffc00000, data 0x2305955/0x253b000, compress 0x0/0x0/0x0, omap 0x5b58c, meta 0x6054a74), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 ms_handle_reset con 0x561088d25c00 session 0x56108bcfcc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.020929337s of 10.765624046s, submitted: 189
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 60350464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x56108695d800 session 0x56108944d180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x561088d26000 session 0x561089493340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 410 ms_handle_reset con 0x561088d27c00 session 0x561088da2000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166420480 unmapped: 60325888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x5610896c9c00 session 0x561088d01500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x56108695d800 session 0x561088da28c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d25c00 session 0x561086291180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d26000 session 0x561088c91dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d27c00 session 0x561086dfb6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2773857 data_alloc: 234881024 data_used: 14780969
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x5610891adc00 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x56108695d800 session 0x561088c90fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d25c00 session 0x56108947e000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 ms_handle_reset con 0x561088d26000 session 0x561086ecf6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d27c00 session 0x561086cbac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7606000/0x0/0x4ffc00000, data 0x230ac3e/0x2544000, compress 0x0/0x0/0x0, omap 0x5c2f6, meta 0x6053d0a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 60964864 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7606000/0x0/0x4ffc00000, data 0x230ac3e/0x2544000, compress 0x0/0x0/0x0, omap 0x5c2f6, meta 0x6053d0a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x56108695a400 session 0x561086d55340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x5610891ab400 session 0x561086e9a700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x56108695a400 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d25c00 session 0x561086d541c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 ms_handle_reset con 0x561088d26000 session 0x561088da2e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 60907520 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x56108695d800 session 0x561088da2700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x56108695a400 session 0x561086cbafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x561088d25c00 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2798118 data_alloc: 234881024 data_used: 14781269
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f73d5000/0x0/0x4ffc00000, data 0x253a82e/0x2775000, compress 0x0/0x0/0x0, omap 0x5ca97, meta 0x6053569), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x561088d26000 session 0x5610897ec540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 60866560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 ms_handle_reset con 0x5610891ab400 session 0x561086d541c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.274993896s of 10.770775795s, submitted: 171
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 60817408 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 415 ms_handle_reset con 0x561088d27c00 session 0x561086d55340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 60817408 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 415 ms_handle_reset con 0x56108695a400 session 0x561088da2540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808176 data_alloc: 234881024 data_used: 14781285
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x561088d25c00 session 0x561088dfe540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f73cc000/0x0/0x4ffc00000, data 0x253fa1f/0x277e000, compress 0x0/0x0/0x0, omap 0x5d554, meta 0x6052aac), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x561088d26000 session 0x561088d00e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 416 ms_handle_reset con 0x5610891ab400 session 0x561088c91500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x561086f19400 session 0x56108bcfcc40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x56108695a400 session 0x561086291180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x561088d25c00 session 0x561086dfb6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812235 data_alloc: 234881024 data_used: 14782224
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 ms_handle_reset con 0x5610891ab400 session 0x561086d54540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165937152 unmapped: 60809216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f73c9000/0x0/0x4ffc00000, data 0x2541661/0x2781000, compress 0x0/0x0/0x0, omap 0x5d6f4, meta 0x605290c), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561088d3f400 session 0x56108944c700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561088d26000 session 0x561088d4f500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x56108695a400 session 0x561089493c00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165945344 unmapped: 60801024 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x5610891ab400 session 0x561086e9a380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.983626366s of 10.283070564s, submitted: 60
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 ms_handle_reset con 0x561086f03c00 session 0x561088d01880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 60784640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f73c6000/0x0/0x4ffc00000, data 0x25431ff/0x2784000, compress 0x0/0x0/0x0, omap 0x5d81e, meta 0x60527e2), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 420 ms_handle_reset con 0x561088d2e800 session 0x561088e20700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836700 data_alloc: 234881024 data_used: 16977171
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165978112 unmapped: 60768256 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 420 ms_handle_reset con 0x561086eca800 session 0x561088f47880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 60792832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x561086eca800 session 0x561087b15dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 60792832 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x56108695a400 session 0x561087b14380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 421 ms_handle_reset con 0x561086f03c00 session 0x561086d49880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 421 handle_osd_map epochs [421,422], i have 422, src has [1,422]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f73b8000/0x0/0x4ffc00000, data 0x2549edd/0x2790000, compress 0x0/0x0/0x0, omap 0x5e905, meta 0x60516fb), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 nova_compute[239846]: 2026-02-02 18:12:54.949 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2841652 data_alloc: 234881024 data_used: 16977171
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 60776448 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 166600704 unmapped: 60145664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 423 ms_handle_reset con 0x561088d2e800 session 0x561088d00380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 56213504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f6bb7000/0x0/0x4ffc00000, data 0x2d42af7/0x2f8b000, compress 0x0/0x0/0x0, omap 0x5ea31, meta 0x60515cf), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.865922928s of 10.292457581s, submitted: 117
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x5610891ab400 session 0x56108947e000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899028 data_alloc: 234881024 data_used: 17101173
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x5610891ab400 session 0x561087b15180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 424 ms_handle_reset con 0x56108695a400 session 0x561089493340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 425 ms_handle_reset con 0x561086eca800 session 0x561088e21500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f6bb4000/0x0/0x4ffc00000, data 0x2d4c104/0x2f96000, compress 0x0/0x0/0x0, omap 0x5f0c7, meta 0x6050f39), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900308 data_alloc: 234881024 data_used: 17105443
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f6bb4000/0x0/0x4ffc00000, data 0x2d4c104/0x2f96000, compress 0x0/0x0/0x0, omap 0x5f0c7, meta 0x6050f39), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 57540608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583072662s of 10.026338577s, submitted: 48
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561088d25c00 session 0x5610894921c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561088d3f400 session 0x56108bcfce00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb1000/0x0/0x4ffc00000, data 0x2d4db83/0x2f99000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x56108695a400 session 0x561088dfee00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 57819136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 57810944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891a9c00 session 0x561087b40e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x2d4db73/0x2f98000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902017 data_alloc: 234881024 data_used: 17105443
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086e9e800 session 0x56108b8c6380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 57810944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086f19800 session 0x56108b8c68c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.400606155s of 17.484048843s, submitted: 4
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891ab800 session 0x56108b73b880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908056 data_alloc: 234881024 data_used: 17142307
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169238528 unmapped: 57507840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908056 data_alloc: 234881024 data_used: 17142307
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f26c, meta 0x6050d94), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.337499619s of 11.355749130s, submitted: 9
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f399, meta 0x6050c67), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917452 data_alloc: 234881024 data_used: 17629731
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169230336 unmapped: 57516032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x561086f19800 session 0x561087b9c1c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 ms_handle_reset con 0x5610891a9c00 session 0x56108944d180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917372 data_alloc: 234881024 data_used: 17626659
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.396843910s of 12.426069260s, submitted: 10
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917084 data_alloc: 234881024 data_used: 17626659
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b8f000/0x0/0x4ffc00000, data 0x2d71b83/0x2fbd000, compress 0x0/0x0/0x0, omap 0x5f4c6, meta 0x6050b3a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 57442304 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169353216 unmapped: 57393152 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 57384960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 426 handle_osd_map epochs [426,427], i have 427, src has [1,427]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x5610891a2400 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 57311232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930167 data_alloc: 234881024 data_used: 19174947
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x2d73782/0x2fc1000, compress 0x0/0x0/0x0, omap 0x5f88d, meta 0x6050773), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 57040896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561088d2ac00 session 0x561087b9da40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561088d2e400 session 0x561088f46a80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 ms_handle_reset con 0x561086e9e400 session 0x56108bcfd340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 56991744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 56991744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 56983552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x56108b8c6000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7d000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169803776 unmapped: 56942592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7d000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949118 data_alloc: 234881024 data_used: 19179043
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169803776 unmapped: 56942592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.832711220s of 11.049080849s, submitted: 47
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088e20000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x56108bcfc540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 56893440 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f6b7f000/0x0/0x4ffc00000, data 0x2e2b33e/0x2fcd000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [0,0,0,0,3,0,1])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137618 data_alloc: 234881024 data_used: 19179043
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a2400 session 0x561087b9ca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086e9e400 session 0x5610897eca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561086ece700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088f46000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x561088da3500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a9c00 session 0x561086d481c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 56475648 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4b3e000/0x0/0x4ffc00000, data 0x4e6b3a0/0x500e000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170311680 unmapped: 56434688 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124766 data_alloc: 234881024 data_used: 19206707
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x5610897ed500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088e21500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2e400 session 0x56108bcfce00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a3000 session 0x56108bcfda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d06400 session 0x56108b8c7a40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 56410112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 56410112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163050 data_alloc: 234881024 data_used: 19206707
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561086e9ba40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169574400 unmapped: 57171968 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561088d4f180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169582592 unmapped: 57163776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4564000/0x0/0x4ffc00000, data 0x54453a0/0x55e8000, compress 0x0/0x0/0x0, omap 0x6069f, meta 0x604f961), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086958000 session 0x561088c90fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169713664 unmapped: 57032704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19000 session 0x56108b8c7dc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.457416534s of 17.052862167s, submitted: 60
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087baac00 session 0x56108944d880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169738240 unmapped: 57008128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f453e000/0x0/0x4ffc00000, data 0x54693d3/0x560e000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169795584 unmapped: 56950784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205337 data_alloc: 234881024 data_used: 20534915
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169795584 unmapped: 56950784 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f453e000/0x0/0x4ffc00000, data 0x54693d3/0x560e000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 56786944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219277 data_alloc: 234881024 data_used: 21404291
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f446c000/0x0/0x4ffc00000, data 0x553b3d3/0x56e0000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 55959552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.492341995s of 10.538371086s, submitted: 18
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 52559872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267287 data_alloc: 234881024 data_used: 21631619
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 52559872 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 39886848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f390b000/0x0/0x4ffc00000, data 0x5d4c3d3/0x5ef1000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x5610891a9c00 session 0x561086d55180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086e9e400 session 0x56108944c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f390b000/0x0/0x4ffc00000, data 0x5d4c3d3/0x5ef1000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 42696704 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561086f19800 session 0x561087b9d6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 42213376 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 42213376 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338255 data_alloc: 234881024 data_used: 23040643
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 184541184 unmapped: 42205184 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561088d2ac00 session 0x561087b40380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f31fd000/0x0/0x4ffc00000, data 0x677a3d3/0x691f000, compress 0x0/0x0/0x0, omap 0x604e1, meta 0x604fb1f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087bab400 session 0x561086ece700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 ms_handle_reset con 0x561087bab400 session 0x561088f46700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 428 handle_osd_map epochs [428,429], i have 429, src has [1,429]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e400 session 0x561087bad500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f3302000/0x0/0x4ffc00000, data 0x66a63c3/0x684a000, compress 0x0/0x0/0x0, omap 0x605ca, meta 0x604fa36), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 43065344 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306134224s of 11.053565025s, submitted: 255
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561086290fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e800 session 0x561086d48fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 43048960 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086f19800 session 0x561088d01880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311260 data_alloc: 234881024 data_used: 22084211
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183615488 unmapped: 43130880 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f332b000/0x0/0x4ffc00000, data 0x65cdfa3/0x6821000, compress 0x0/0x0/0x0, omap 0x60ced, meta 0x604f313), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086958000 session 0x56108944c700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086f19000 session 0x561086cba540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183869440 unmapped: 42876928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e400 session 0x561087b39340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561088d4fc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183877632 unmapped: 42868736 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3304367 data_alloc: 234881024 data_used: 21987971
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086e9e800 session 0x561087b41180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561086958000 session 0x561086cbbdc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179437568 unmapped: 47308800 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x561088d2e400 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x5610891a3000 session 0x5610897ec700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f3d70000/0x0/0x4ffc00000, data 0x5b8af0e/0x5ddb000, compress 0x0/0x0/0x0, omap 0x610d6, meta 0x604ef2a), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 ms_handle_reset con 0x56108695a400 session 0x561086d55880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 51191808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 51191808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175603712 unmapped: 51142656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175603712 unmapped: 51142656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157254219s of 10.557867050s, submitted: 120
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086e9e400 session 0x561088f46c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086958000 session 0x56108947fa40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105032 data_alloc: 234881024 data_used: 13201390
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b82000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x612b7, meta 0x604ed49), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x56108695a400 session 0x561088f476c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561088d2e400 session 0x561088da3500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104440 data_alloc: 234881024 data_used: 13201489
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4b84000/0x0/0x4ffc00000, data 0x4d7797d/0x4fc8000, compress 0x0/0x0/0x0, omap 0x61688, meta 0x604e978), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104440 data_alloc: 234881024 data_used: 13201489
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.654470444s of 10.694202423s, submitted: 26
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x5610891a3000 session 0x56108944d180
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086f19000 session 0x561088dfec40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 51453952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 ms_handle_reset con 0x561086958000 session 0x561088da36c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 48308224 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178446336 unmapped: 48300032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x56108bcfc700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f4b7c000/0x0/0x4ffc00000, data 0x4d795ed/0x4fce000, compress 0x0/0x0/0x0, omap 0x61c5b, meta 0x604e3a5), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178446336 unmapped: 48300032 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2e400 session 0x56108bcfd340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610891a3000 session 0x5610897eddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561087bab400 session 0x561088d4ea80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086958000 session 0x561088c91500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x561088c91340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123416 data_alloc: 234881024 data_used: 16871603
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2ac00 session 0x5610897eca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 48291840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610891a9c00 session 0x5610897ec8c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x5610899edc00 session 0x561088e20e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086fbec00 session 0x561086cbafc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f4b3d000/0x0/0x4ffc00000, data 0x4db964f/0x500f000, compress 0x0/0x0/0x0, omap 0x61c5b, meta 0x604e3a5), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561086958000 session 0x561086e9b340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125646 data_alloc: 234881024 data_used: 16958131
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x56108695a400 session 0x561086cbbc00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 48021504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.913650513s of 11.028878212s, submitted: 42
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 ms_handle_reset con 0x561088d2ac00 session 0x56108947f6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 48013312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x5610891a9c00 session 0x561088f46700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 48013312 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x5610891a9c00 session 0x561087baca80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x561086958000 session 0x56108b8c7500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4b3c000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x62318, meta 0x604dce8), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126570 data_alloc: 234881024 data_used: 16958033
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178749440 unmapped: 47996928 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4b3c000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x62318, meta 0x604dce8), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178765824 unmapped: 47980544 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 ms_handle_reset con 0x56108695a400 session 0x561086e9ac40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 432 handle_osd_map epochs [432,433], i have 433, src has [1,433]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f4b3e000/0x0/0x4ffc00000, data 0x4dbb109/0x500e000, compress 0x0/0x0/0x0, omap 0x6252b, meta 0x604dad5), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 178823168 unmapped: 47923200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 433 ms_handle_reset con 0x561088d2ac00 session 0x56108bcfda40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f4b39000/0x0/0x4ffc00000, data 0x4dbcb88/0x5011000, compress 0x0/0x0/0x0, omap 0x6267a, meta 0x604d986), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187219968 unmapped: 39526400 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561088d28400 session 0x561088c90fc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f4b39000/0x0/0x4ffc00000, data 0x4dbcb88/0x5011000, compress 0x0/0x0/0x0, omap 0x6267a, meta 0x604d986), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561086fbec00 session 0x561086ece000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331166 data_alloc: 234881024 data_used: 18956881
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x561086958000 session 0x561086dfa540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.596965790s of 11.039932251s, submitted: 86
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 ms_handle_reset con 0x56108695a400 session 0x56108bcfc540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 46800896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180011008 unmapped: 46735360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d28400 session 0x561089492700
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f2978000/0x0/0x4ffc00000, data 0x6f79332/0x71d2000, compress 0x0/0x0/0x0, omap 0x62da1, meta 0x604d25f), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180019200 unmapped: 46727168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d2ac00 session 0x56108944c380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338362 data_alloc: 234881024 data_used: 18953825
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180199424 unmapped: 46546944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561088d2e400 session 0x561086d49880
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x5610891a3000 session 0x561088da2c40
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x561086958000 session 0x561088f46000
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 ms_handle_reset con 0x56108695a400 session 0x561086cde380
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 46530560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561086fbec00 session 0x561088da36c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180248576 unmapped: 46497792 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561086958000 session 0x561087bad500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x56108695a400 session 0x561086e9aa80
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x561088d2e400 session 0x561088da2e00
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 46473216 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 ms_handle_reset con 0x5610891a3000 session 0x561087b40540
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d28400 session 0x561087b41500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561086958000 session 0x561088da3500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 46448640 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x56108695a400 session 0x5610897ed6c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d2e400 session 0x561087b9ddc0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351919 data_alloc: 234881024 data_used: 20944465
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x5610891a3000 session 0x561087b15500
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f2972000/0x0/0x4ffc00000, data 0x6f7caa0/0x71d6000, compress 0x0/0x0/0x0, omap 0x6359d, meta 0x604ca63), peers [0,1] op hist [])
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d27400 session 0x56108b8c76c0
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x561088d27000 session 0x561087b39340
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:54 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176480256 unmapped: 50266112 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117007 data_alloc: 234881024 data_used: 18469990
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 50200576 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f49cb000/0x0/0x4ffc00000, data 0x452ea3e/0x4787000, compress 0x0/0x0/0x0, omap 0x63178, meta 0x604ce88), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.358721733s of 15.690736771s, submitted: 139
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 ms_handle_reset con 0x56108695a400 session 0x561086d541c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 437 handle_osd_map epochs [437,438], i have 438, src has [1,438]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x561088d2e400 session 0x561086d488c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x5610891ad800 session 0x561088dffc00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175915008 unmapped: 50831360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 438 ms_handle_reset con 0x5610896c7800 session 0x561086cdf180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x5610891a3000 session 0x561086cba8c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122139 data_alloc: 234881024 data_used: 18473988
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f53bb000/0x0/0x4ffc00000, data 0x45321ae/0x478d000, compress 0x0/0x0/0x0, omap 0x63aa8, meta 0x604c558), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x5610896c7800 session 0x561086d55180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 51093504 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 43466752 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 439 ms_handle_reset con 0x56108695a400 session 0x56108b8c76c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 440 ms_handle_reset con 0x561088d27000 session 0x561088da3500
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 39870464 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 440 ms_handle_reset con 0x561088d2e400 session 0x561088da2700
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 440 heartbeat osd_stat(store_statfs(0x4f3907000/0x0/0x4ffc00000, data 0x4a36d4a/0x4c93000, compress 0x0/0x0/0x0, omap 0x63bbe, meta 0x71ec442), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 440 handle_osd_map epochs [441,441], i have 441, src has [1,441]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 39567360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199074 data_alloc: 234881024 data_used: 19055620
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 39567360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 441 ms_handle_reset con 0x56108695a400 session 0x5610897eca80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 39493632 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 39493632 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f36ae000/0x0/0x4ffc00000, data 0x4c8a959/0x4eea000, compress 0x0/0x0/0x0, omap 0x64128, meta 0x71ebed8), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 441 handle_osd_map epochs [442,442], i have 442, src has [1,442]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.086682320s of 10.323908806s, submitted: 84
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 442 ms_handle_reset con 0x561088d27000 session 0x5610897ec1c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181136 data_alloc: 234881024 data_used: 19059716
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 443 ms_handle_reset con 0x5610896c7800 session 0x561088da2000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 45031424 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x5610891a3000 session 0x561088d4f880
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f3ab7000/0x0/0x4ffc00000, data 0x4c8e163/0x4ef1000, compress 0x0/0x0/0x0, omap 0x64356, meta 0x71ebcaa), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x561088d2e400 session 0x561088d4e540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 45015040 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x5610891a9c00 session 0x561088d008c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x561086958000 session 0x561088f47c00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 44998656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 ms_handle_reset con 0x56108695a400 session 0x561087b15340
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 445 ms_handle_reset con 0x561088d2e400 session 0x56108cc2b180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 44998656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f3ab1000/0x0/0x4ffc00000, data 0x4c9190b/0x4ef7000, compress 0x0/0x0/0x0, omap 0x649dc, meta 0x71eb624), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191814 data_alloc: 234881024 data_used: 19084292
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 446 ms_handle_reset con 0x561088d27000 session 0x56108d0a4fc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f3ab1000/0x0/0x4ffc00000, data 0x4c934b5/0x4ef9000, compress 0x0/0x0/0x0, omap 0x64af4, meta 0x71eb50c), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 44982272 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 447 ms_handle_reset con 0x561086958000 session 0x5610894936c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb  2 13:12:55 np0005605476 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 44965888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x56108695a400 session 0x561086cde8c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x561088d2e400 session 0x56108944c700
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 448 ms_handle_reset con 0x5610891a9c00 session 0x561086d55dc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.562074661s of 10.011097908s, submitted: 148
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x5610891a3000 session 0x56108944c540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x561086958000 session 0x561086cbbdc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 44777472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 449 ms_handle_reset con 0x56108695a400 session 0x56108d0a4000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f3aaa000/0x0/0x4ffc00000, data 0x4c96cf7/0x4f00000, compress 0x0/0x0/0x0, omap 0x6517f, meta 0x71eae81), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 450 ms_handle_reset con 0x5610896c7800 session 0x561089492e00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 44769280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376517 data_alloc: 234881024 data_used: 19084806
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 44769280 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 181993472 unmapped: 44752896 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 44736512 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 451 heartbeat osd_stat(store_statfs(0x4f1adf000/0x0/0x4ffc00000, data 0x6c5bfd5/0x6ec9000, compress 0x0/0x0/0x0, omap 0x65d69, meta 0x71ea297), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 43687936 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382177 data_alloc: 234881024 data_used: 19085663
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f1ade000/0x0/0x4ffc00000, data 0x6c5da9c/0x6ecc000, compress 0x0/0x0/0x0, omap 0x65ebf, meta 0x71ea141), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 ms_handle_reset con 0x561088d2e400 session 0x5610897ec8c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 43679744 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196254730s of 10.522126198s, submitted: 58
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x5610891a9c00 session 0x561086ecfa40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x561086958000 session 0x561089493340
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x56108695a400 session 0x561088dff340
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 43671552 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 ms_handle_reset con 0x561088d2e400 session 0x56108944d500
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388530 data_alloc: 234881024 data_used: 19085679
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 43655168 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f1ada000/0x0/0x4ffc00000, data 0x6c5f583/0x6ed0000, compress 0x0/0x0/0x0, omap 0x666c2, meta 0x71e993e), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3468914 data_alloc: 251658240 data_used: 32346065
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 453 handle_osd_map epochs [453,454], i have 454, src has [1,454]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 454 ms_handle_reset con 0x561088d2dc00 session 0x561086290fc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193036288 unmapped: 33710080 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f1ad4000/0x0/0x4ffc00000, data 0x6c62d0f/0x6ed6000, compress 0x0/0x0/0x0, omap 0x66d5f, meta 0x71e92a1), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.200019836s of 11.241814613s, submitted: 38
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 455 ms_handle_reset con 0x561087baa000 session 0x561088f46c40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3476024 data_alloc: 251658240 data_used: 32346065
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f1ad4000/0x0/0x4ffc00000, data 0x6c62d0f/0x6ed6000, compress 0x0/0x0/0x0, omap 0x66d5f, meta 0x71e92a1), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 456 ms_handle_reset con 0x561086958000 session 0x56108944d340
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 33701888 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 456 ms_handle_reset con 0x56108695a400 session 0x561088dff180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 22429696 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 456 handle_osd_map epochs [456,457], i have 457, src has [1,457]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 457 ms_handle_reset con 0x561087baa000 session 0x561088c91a40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 20643840 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 19472384 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 457 ms_handle_reset con 0x561088d2dc00 session 0x561088d00000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614725 data_alloc: 251658240 data_used: 34719697
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 19423232 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 457 heartbeat osd_stat(store_statfs(0x4ef54f000/0x0/0x4ffc00000, data 0x7e354b7/0x80ab000, compress 0x0/0x0/0x0, omap 0x66f93, meta 0x838906d), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207413248 unmapped: 19333120 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 19275776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 19275776 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 459 ms_handle_reset con 0x561088d2e400 session 0x561088dfea80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 24207360 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3611079 data_alloc: 251658240 data_used: 34723793
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x561086958000 session 0x561086d49c00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ef754000/0x0/0x4ffc00000, data 0x7e3a6fa/0x80b4000, compress 0x0/0x0/0x0, omap 0x6778d, meta 0x8388873), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202555392 unmapped: 24190976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926978111s of 11.327394485s, submitted: 171
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x56108695a400 session 0x561088d00540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202555392 unmapped: 24190976 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x5610896c7800 session 0x561086dfaa80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 24174592 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x5610891ad800 session 0x561088dfe700
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 ms_handle_reset con 0x561088d2dc00 session 0x56108b73afc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 460 handle_osd_map epochs [460,461], i have 461, src has [1,461]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202596352 unmapped: 24150016 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 461 ms_handle_reset con 0x561087baa000 session 0x561089492c40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 461 heartbeat osd_stat(store_statfs(0x4ef753000/0x0/0x4ffc00000, data 0x7e3c304/0x80b7000, compress 0x0/0x0/0x0, omap 0x67e21, meta 0x83881df), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 461 ms_handle_reset con 0x561086958000 session 0x561086ecfa40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x56108695a400 session 0x56108944c540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616571 data_alloc: 251658240 data_used: 34724476
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x5610891ad800 session 0x56108d0a4fc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 462 heartbeat osd_stat(store_statfs(0x4ef751000/0x0/0x4ffc00000, data 0x7e3de92/0x80b9000, compress 0x0/0x0/0x0, omap 0x67f3b, meta 0x83880c5), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 462 ms_handle_reset con 0x5610896c7800 session 0x5610887788c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 24109056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 464 ms_handle_reset con 0x561086958000 session 0x56108b8c76c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3624828 data_alloc: 251658240 data_used: 34725061
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 464 ms_handle_reset con 0x56108695a400 session 0x561088da21c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 24076288 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561087baa000 session 0x561086d55180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 heartbeat osd_stat(store_statfs(0x4ef748000/0x0/0x4ffc00000, data 0x7e41547/0x80c0000, compress 0x0/0x0/0x0, omap 0x68664, meta 0x838799c), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.062519073s of 10.297443390s, submitted: 79
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x5610891ad800 session 0x561088f47500
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3625502 data_alloc: 251658240 data_used: 34725820
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561088d2e400 session 0x5610894936c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 heartbeat osd_stat(store_statfs(0x4ef748000/0x0/0x4ffc00000, data 0x7e430d5/0x80c2000, compress 0x0/0x0/0x0, omap 0x6877f, meta 0x8387881), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x561086958000 session 0x56108944c700
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202711040 unmapped: 24035328 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 ms_handle_reset con 0x56108695a400 session 0x561086cbbdc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 465 handle_osd_map epochs [465,466], i have 466, src has [1,466]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x56108b8c6fc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088da2fc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958c00 session 0x561086d54a80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x56108b73a1c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630068 data_alloc: 251658240 data_used: 35159996
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4ef745000/0x0/0x4ffc00000, data 0x7e44b54/0x80c5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 24305664 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088e20e00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.169008255s of 12.200960159s, submitted: 17
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x561087b15c00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4ef745000/0x0/0x4ffc00000, data 0x7e44b54/0x80c5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3607720 data_alloc: 251658240 data_used: 34858940
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x56108cc2a380
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb45000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb45000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3607720 data_alloc: 251658240 data_used: 34858940
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 24240128 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 202850304 unmapped: 23896064 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614752 data_alloc: 251658240 data_used: 36005820
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3614752 data_alloc: 251658240 data_used: 36005820
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203194368 unmapped: 23552000 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203964416 unmapped: 22781952 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x56108695a400 session 0x56108b8c7a40
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.193916321s of 20.199316025s, submitted: 2
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x561088dff340
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x561087b9d180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619672 data_alloc: 251658240 data_used: 36943804
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4efb47000/0x0/0x4ffc00000, data 0x7a44b54/0x7cc5000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 203702272 unmapped: 23044096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x56108947f6c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x561087b40540
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325979214' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2917000/0x0/0x4ffc00000, data 0x4c75ae2/0x4ef4000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285644 data_alloc: 234881024 data_used: 20569516
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 196575232 unmapped: 30171136 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.148120880s of 17.195735931s, submitted: 26
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561087b9c380
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561086958000 session 0x561086e9afc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561087baa000 session 0x561086ecea80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x561088d01880
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x5610891ad800 session 0x561088f47dc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323678 data_alloc: 234881024 data_used: 20573514
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561089705000 session 0x561088e20000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3364254 data_alloc: 251658240 data_used: 27454794
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3364254 data_alloc: 251658240 data_used: 27454794
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2288000/0x0/0x4ffc00000, data 0x5305ae2/0x5584000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 29122560 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.381216049s of 16.466739655s, submitted: 3
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 27443200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 27443200 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2f000/0x0/0x4ffc00000, data 0x595eae2/0x5bdd000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3409358 data_alloc: 251658240 data_used: 27585866
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2f000/0x0/0x4ffc00000, data 0x595eae2/0x5bdd000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3409358 data_alloc: 251658240 data_used: 27585866
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 ms_handle_reset con 0x561088d2e400 session 0x5610897ec540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3411064 data_alloc: 251658240 data_used: 27585866
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 27435008 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.991856575s of 14.100547791s, submitted: 33
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 27369472 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f1c2e000/0x0/0x4ffc00000, data 0x595eb44/0x5bde000, compress 0x0/0x0/0x0, omap 0x68912, meta 0x83876ee), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199589888 unmapped: 27156480 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199606272 unmapped: 27140096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199606272 unmapped: 27140096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3419907 data_alloc: 251658240 data_used: 27585866
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199655424 unmapped: 27090944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f1c0c000/0x0/0x4ffc00000, data 0x597d6e0/0x5bfe000, compress 0x0/0x0/0x0, omap 0x68a2d, meta 0x83875d3), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199655424 unmapped: 27090944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 26869760 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891ad800 session 0x561087bad500
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425391 data_alloc: 251658240 data_used: 27585866
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 93K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 8783 syncs, 2.73 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8971 writes, 32K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 26.58 MB, 0.04 MB/s#012Interval WAL: 8971 writes, 3901 syncs, 2.30 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599be18/0x5c1d000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 199925760 unmapped: 26820608 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425903 data_alloc: 251658240 data_used: 27688266
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.739727020s of 13.804318428s, submitted: 30
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610899ecc00 session 0x561087b40a80
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1be3000/0x0/0x4ffc00000, data 0x59a7e18/0x5c29000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 200056832 unmapped: 26689536 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x5610887788c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3430433 data_alloc: 251658240 data_used: 27688266
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1bdf000/0x0/0x4ffc00000, data 0x59a8e8a/0x5c2c000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 25632768 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2a800 session 0x561086dfb880
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2e400 session 0x561088d4f880
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3438302 data_alloc: 251658240 data_used: 27688266
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3438814 data_alloc: 251658240 data_used: 27790666
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x56108bcfdc00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891ad800 session 0x561087b14380
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f1b57000/0x0/0x4ffc00000, data 0x5a30e8a/0x5cb4000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 25042944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610899ecc00 session 0x561086cba000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.265277863s of 18.318876266s, submitted: 23
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a5800 session 0x561086d55880
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x561088d2e400 session 0x561087b9d6c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3429720 data_alloc: 251658240 data_used: 27790666
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 25591808 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 ms_handle_reset con 0x5610891a2800 session 0x56108947ee00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 469 handle_osd_map epochs [469,470], i have 470, src has [1,470]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 470 ms_handle_reset con 0x5610891ad800 session 0x56108b73b500
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f1bed000/0x0/0x4ffc00000, data 0x599ce18/0x5c1e000, compress 0x0/0x0/0x0, omap 0x690e5, meta 0x8386f1b), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 25583616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 470 ms_handle_reset con 0x5610899ecc00 session 0x56108b73b180
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 25583616 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 471 ms_handle_reset con 0x561088d2a400 session 0x561088d4e540
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 25550848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 471 ms_handle_reset con 0x561088d2a400 session 0x561086e9a380
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 25550848 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3436132 data_alloc: 251658240 data_used: 27794664
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561088d2e400 session 0x561088da3dc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201203712 unmapped: 25542656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x5610891a2800 session 0x5610897ece00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 201203712 unmapped: 25542656 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f1be5000/0x0/0x4ffc00000, data 0x59a01e8/0x5c27000, compress 0x0/0x0/0x0, omap 0x6dbbe, meta 0x8382442), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561086958000 session 0x5610897eddc0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561087baa000 session 0x561087b416c0
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 ms_handle_reset con 0x561086958000 session 0x561088e20e00
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3319998 data_alloc: 234881024 data_used: 20578738
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f2905000/0x0/0x4ffc00000, data 0x4c80186/0x4f06000, compress 0x0/0x0/0x0, omap 0x6dbbe, meta 0x8382442), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.971760750s of 14.100159645s, submitted: 87
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323428 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2901000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323428 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2901000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 29229056 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.675070763s of 11.681387901s, submitted: 10
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197558272 unmapped: 29188096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197558272 unmapped: 29188096 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 29138944 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197656576 unmapped: 29089792 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 28778496 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197935104 unmapped: 28811264 heap: 226746368 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'log dump' '{prefix=log dump}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 209035264 unmapped: 28753920 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf dump' '{prefix=perf dump}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf schema' '{prefix=perf schema}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: mgrc ms_handle_reset ms_handle_reset con 0x561086eca000
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/496403208
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/496403208,v1:192.168.122.100:6801/496403208]
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: mgrc handle_mgr_configure stats_period=5
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198402048 unmapped: 39387136 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198402048 unmapped: 39387136 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198402048 unmapped: 39387136 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198410240 unmapped: 39378944 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198418432 unmapped: 39370752 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198418432 unmapped: 39370752 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198426624 unmapped: 39362560 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 39354368 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198443008 unmapped: 39346176 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198451200 unmapped: 39337984 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.482543945s of 299.722412109s, submitted: 90
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198533120 unmapped: 39256064 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198541312 unmapped: 39247872 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198549504 unmapped: 39239680 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198557696 unmapped: 39231488 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 39215104 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 39215104 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 39215104 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 39215104 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 39215104 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198582272 unmapped: 39206912 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198590464 unmapped: 39198720 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198590464 unmapped: 39198720 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197959680 unmapped: 39829504 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 39821312 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 39821312 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 39821312 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197967872 unmapped: 39821312 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197976064 unmapped: 39813120 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197984256 unmapped: 39804928 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 197992448 unmapped: 39796736 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198000640 unmapped: 39788544 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198008832 unmapped: 39780352 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 39772160 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 39763968 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 39763968 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 39763968 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 39763968 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 39755776 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198049792 unmapped: 39739392 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198066176 unmapped: 39723008 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198074368 unmapped: 39714816 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198074368 unmapped: 39714816 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198074368 unmapped: 39714816 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 39706624 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 39706624 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 39706624 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 39706624 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 39706624 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198090752 unmapped: 39698432 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198098944 unmapped: 39690240 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198107136 unmapped: 39682048 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198115328 unmapped: 39673856 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198123520 unmapped: 39665664 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198123520 unmapped: 39665664 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198123520 unmapped: 39665664 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198123520 unmapped: 39665664 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198131712 unmapped: 39657472 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198131712 unmapped: 39657472 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198139904 unmapped: 39649280 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198139904 unmapped: 39649280 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198139904 unmapped: 39649280 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 39641088 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 39632896 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198164480 unmapped: 39624704 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198172672 unmapped: 39616512 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198180864 unmapped: 39608320 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198189056 unmapped: 39600128 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198197248 unmapped: 39591936 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198197248 unmapped: 39591936 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198197248 unmapped: 39591936 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198197248 unmapped: 39591936 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198205440 unmapped: 39583744 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198205440 unmapped: 39583744 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198205440 unmapped: 39583744 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198213632 unmapped: 39575552 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198213632 unmapped: 39575552 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198213632 unmapped: 39575552 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 234881024 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 39567360 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198230016 unmapped: 39559168 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198238208 unmapped: 39550976 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198246400 unmapped: 39542784 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 39534592 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 39534592 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 39534592 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 39534592 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198262784 unmapped: 39526400 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198262784 unmapped: 39526400 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198262784 unmapped: 39526400 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198262784 unmapped: 39526400 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 24K writes, 94K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 24K writes, 8994 syncs, 2.71 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 445 writes, 994 keys, 445 commit groups, 1.0 writes per commit group, ingest: 0.55 MB, 0.00 MB/s#012Interval WAL: 445 writes, 211 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 39518208 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198279168 unmapped: 39510016 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198287360 unmapped: 39501824 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198295552 unmapped: 39493632 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198303744 unmapped: 39485440 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198303744 unmapped: 39485440 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198303744 unmapped: 39485440 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198303744 unmapped: 39485440 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198311936 unmapped: 39477248 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198311936 unmapped: 39477248 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198311936 unmapped: 39477248 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198311936 unmapped: 39477248 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198311936 unmapped: 39477248 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198328320 unmapped: 39460864 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198336512 unmapped: 39452672 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 39436288 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198361088 unmapped: 39428096 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198369280 unmapped: 39419904 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198369280 unmapped: 39419904 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198369280 unmapped: 39419904 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 39411712 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 39403520 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 39403520 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 39403520 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 39403520 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.327301025s of 300.364410400s, submitted: 24
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198393856 unmapped: 39395328 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198393856 unmapped: 39395328 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198459392 unmapped: 39329792 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198483968 unmapped: 39305216 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198492160 unmapped: 39297024 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198500352 unmapped: 39288832 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198508544 unmapped: 39280640 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198516736 unmapped: 39272448 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198524928 unmapped: 39264256 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322708 data_alloc: 218103808 data_used: 20582736
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f2903000/0x0/0x4ffc00000, data 0x4c81c05/0x4f09000, compress 0x0/0x0/0x0, omap 0x6dd53, meta 0x83822ad), peers [0,1] op hist [])
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198533120 unmapped: 39256064 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198189056 unmapped: 39600128 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 39444480 heap: 237789184 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:55 np0005605476 ceph-osd[87792]: do_command 'log dump' '{prefix=log dump}'
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.171 239853 WARNING nova.virt.libvirt.driver [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.172 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3906MB free_disk=59.98776772618294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.172 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.172 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.260 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.261 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.284 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 13:12:55 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19452 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584167635' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100309334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.853 239853 DEBUG oslo_concurrency.processutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.858 239853 DEBUG nova.compute.provider_tree [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed in ProviderTree for provider: a0b0d175-0948-46db-92ba-608ef43a689f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 13:12:55 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19458 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.880 239853 DEBUG nova.scheduler.client.report [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Inventory has not changed for provider a0b0d175-0948-46db-92ba-608ef43a689f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} v 0)
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} : dispatch
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.882 239853 DEBUG nova.compute.resource_tracker [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 13:12:55 np0005605476 nova_compute[239846]: 2026-02-02 18:12:55.883 239853 DEBUG oslo_concurrency.lockutils [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 13:12:55 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 13:12:56 np0005605476 nova_compute[239846]: 2026-02-02 18:12:56.041 239853 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 39 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 13:12:56 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112444430' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 13:12:56 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19462 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} v 0)
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/150068811' entity='mgr.compute-0.hccdnu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.molmny", "name": "rgw_frontends"} : dispatch
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 13:12:56 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332557588' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 13:12:56 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19466 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:57 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19470 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 13:12:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448306338' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 13:12:57 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 13:12:57 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487068889' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 13:12:57 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19474 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 13:12:58 np0005605476 ceph-mgr[75493]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail
Feb  2 13:12:58 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19478 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:12:58 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  2 13:12:58 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589992821' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb  2 13:12:58 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19480 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:12:58 np0005605476 nova_compute[239846]: 2026-02-02 18:12:58.879 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:58 np0005605476 nova_compute[239846]: 2026-02-02 18:12:58.880 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:59 np0005605476 nova_compute[239846]: 2026-02-02 18:12:59.241 239853 DEBUG oslo_service.periodic_task [None req-2829bb05-3ca0-460a-873d-6129b7c9c50b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 13:12:59 np0005605476 ceph-mgr[75493]: log_channel(audit) log [DBG] : from='client.19484 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 13:12:59 np0005605476 ceph-mon[75197]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 13:12:59 np0005605476 ceph-mon[75197]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115429314' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2a400 session 0x555b29bc9dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2b400 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2a9bd000 session 0x555b28117500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 51642368 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2a000 session 0x555b27d156c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2a400 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2b000 session 0x555b267476c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2ac2b400 session 0x555b267476c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 ms_handle_reset con 0x555b2a9bd000 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138166272 unmapped: 57901056 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 216 ms_handle_reset con 0x555b2ac2a000 session 0x555b28e44fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 216 ms_handle_reset con 0x555b2ac2a400 session 0x555b273b5a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138166272 unmapped: 57901056 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 217 ms_handle_reset con 0x555b2ac2b000 session 0x555b293bbc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138190848 unmapped: 57876480 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 218 ms_handle_reset con 0x555b2ac2ac00 session 0x555b29698c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 218 heartbeat osd_stat(store_statfs(0x4f858e000/0x0/0x4ffc00000, data 0x37b79bd/0x38fa000, compress 0x0/0x0/0x0, omap 0x35bb3, meta 0x3d3a44d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138190848 unmapped: 57876480 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 219 ms_handle_reset con 0x555b2a9bd000 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 219 ms_handle_reset con 0x555b2ac2a000 session 0x555b29699180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2080161 data_alloc: 234881024 data_used: 15426982
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 219 ms_handle_reset con 0x555b2ac2a400 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138190848 unmapped: 57876480 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 219 ms_handle_reset con 0x555b2c171c00 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2ac2b000 session 0x555b293ba540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2a9bd000 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2ac2a000 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138379264 unmapped: 57688064 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b27b82000 session 0x555b293bbdc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2a9bc400 session 0x555b2a436700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138379264 unmapped: 57688064 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2c171c00 session 0x555b27d0fc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137281536 unmapped: 58785792 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.623438835s of 10.866482735s, submitted: 152
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b27b82000 session 0x555b28e1d180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137297920 unmapped: 58769408 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1985056 data_alloc: 234881024 data_used: 12054854
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f9351000/0x0/0x4ffc00000, data 0x29f2c6d/0x2b39000, compress 0x0/0x0/0x0, omap 0x3689f, meta 0x3d39761), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137297920 unmapped: 58769408 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 ms_handle_reset con 0x555b2a9bd000 session 0x555b281b0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 58720256 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 221 ms_handle_reset con 0x555b2ac2a000 session 0x555b28e44380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 58720256 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 221 ms_handle_reset con 0x555b2c171000 session 0x555b293bba40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 221 ms_handle_reset con 0x555b2c170800 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 222 ms_handle_reset con 0x555b2c170c00 session 0x555b2a436fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 222 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d156c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 222 heartbeat osd_stat(store_statfs(0x4f934c000/0x0/0x4ffc00000, data 0x29f4897/0x2b3e000, compress 0x0/0x0/0x0, omap 0x36bdd, meta 0x3d39423), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 58720256 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 58720256 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 224 ms_handle_reset con 0x555b27b82000 session 0x555b29d98700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 224 ms_handle_reset con 0x555b2c170800 session 0x555b27d0efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1949043 data_alloc: 234881024 data_used: 12058915
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 58695680 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 224 ms_handle_reset con 0x555b2a9bd000 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 225 ms_handle_reset con 0x555b27b82000 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x20a7c65/0x21f5000, compress 0x0/0x0/0x0, omap 0x374a8, meta 0x3d38b58), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 225 ms_handle_reset con 0x555b2a9bc400 session 0x555b28116700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 58687488 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139157504 unmapped: 56909824 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f9669000/0x0/0x4ffc00000, data 0x26d4871/0x2823000, compress 0x0/0x0/0x0, omap 0x37665, meta 0x3d3899b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 226 ms_handle_reset con 0x555b2c170800 session 0x555b28ea0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137945088 unmapped: 58122240 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 226 ms_handle_reset con 0x555b2ac2a000 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 227 ms_handle_reset con 0x555b2c171000 session 0x555b28e1c700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137945088 unmapped: 58122240 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.463720322s of 10.692575455s, submitted: 115
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 228 ms_handle_reset con 0x555b27b82000 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 228 ms_handle_reset con 0x555b2c170c00 session 0x555b28116e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2012279 data_alloc: 234881024 data_used: 12849541
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 58204160 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 58187776 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d15dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 58171392 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 ms_handle_reset con 0x555b2ac2a000 session 0x555b293bba40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 heartbeat osd_stat(store_statfs(0x4f9654000/0x0/0x4ffc00000, data 0x26e18cd/0x2838000, compress 0x0/0x0/0x0, omap 0x381fd, meta 0x3d37e03), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138952704 unmapped: 57114624 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 ms_handle_reset con 0x555b2c170800 session 0x555b2a436a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 ms_handle_reset con 0x555b27b82000 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b2a9bc400 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b2ac2a000 session 0x555b273b5a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b2c170c00 session 0x555b28e1d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b29002c00 session 0x555b298c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b27b82000 session 0x555b29b55c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b2a9bc400 session 0x555b28e45dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b2ac2a000 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139272192 unmapped: 56795136 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 ms_handle_reset con 0x555b29002800 session 0x555b28e35c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 231 ms_handle_reset con 0x555b2b3ac800 session 0x555b298916c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2063180 data_alloc: 234881024 data_used: 12850739
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f91ba000/0x0/0x4ffc00000, data 0x2b75422/0x2cd0000, compress 0x0/0x0/0x0, omap 0x3885c, meta 0x3d377a4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 56778752 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b26999400 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b2c170c00 session 0x555b29699a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139313152 unmapped: 56754176 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b27b82000 session 0x555b29890700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139313152 unmapped: 56754176 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b29002800 session 0x555b28117500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b29acdc00 session 0x555b28e45340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 ms_handle_reset con 0x555b29acc000 session 0x555b298c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139591680 unmapped: 56475648 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 ms_handle_reset con 0x555b26999400 session 0x555b28117340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 ms_handle_reset con 0x555b27b82000 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 ms_handle_reset con 0x555b2c170c00 session 0x555b26fd2700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 ms_handle_reset con 0x555b26aa9000 session 0x555b2966cc40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f91ae000/0x0/0x4ffc00000, data 0x2b7a839/0x2cdc000, compress 0x0/0x0/0x0, omap 0x393b5, meta 0x3d36c4b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139591680 unmapped: 56475648 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b29002800 session 0x555b293baa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b27592000 session 0x555b2966dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b27b82000 session 0x555b298908c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b2a9bc400 session 0x555b2966c000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b2ac2a000 session 0x555b2966d500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2081142 data_alloc: 234881024 data_used: 12851965
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139599872 unmapped: 56467456 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139845632 unmapped: 56221696 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b27b82000 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.871068001s of 12.063026428s, submitted: 112
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 ms_handle_reset con 0x555b2a9bc400 session 0x555b28ea0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139845632 unmapped: 56221696 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 235 ms_handle_reset con 0x555b29acc000 session 0x555b28ea0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 235 ms_handle_reset con 0x555b27592c00 session 0x555b27d0f180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 55951360 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f91a4000/0x0/0x4ffc00000, data 0x2b7e536/0x2ce6000, compress 0x0/0x0/0x0, omap 0x398cd, meta 0x3d36733), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 ms_handle_reset con 0x555b27592800 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 ms_handle_reset con 0x555b2c170c00 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 ms_handle_reset con 0x555b27592000 session 0x555b29698c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 ms_handle_reset con 0x555b27592c00 session 0x555b29890fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 ms_handle_reset con 0x555b29002800 session 0x555b28e34000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 55943168 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2119006 data_alloc: 234881024 data_used: 17442061
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 237 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d18380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 237 ms_handle_reset con 0x555b27b82000 session 0x555b29d98c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 237 ms_handle_reset con 0x555b2a9bc400 session 0x555b29d98380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 55943168 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 238 ms_handle_reset con 0x555b27592000 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 55934976 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 239 ms_handle_reset con 0x555b27592c00 session 0x555b27d141c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 239 ms_handle_reset con 0x555b29acc000 session 0x555b298c1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 55902208 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 239 ms_handle_reset con 0x555b27592c00 session 0x555b2948b500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 240 heartbeat osd_stat(store_statfs(0x4f9192000/0x0/0x4ffc00000, data 0x2b854f6/0x2cf4000, compress 0x0/0x0/0x0, omap 0x3a475, meta 0x3d35b8b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 240 ms_handle_reset con 0x555b29acc000 session 0x555b2966d500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 55828480 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 ms_handle_reset con 0x555b27b82000 session 0x555b28e44380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 ms_handle_reset con 0x555b2a9bc400 session 0x555b293bb880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f9192000/0x0/0x4ffc00000, data 0x2b87110/0x2cf8000, compress 0x0/0x0/0x0, omap 0x3a7d2, meta 0x3d3582e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 ms_handle_reset con 0x555b29002800 session 0x555b27d0fc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 ms_handle_reset con 0x555b27592000 session 0x555b29d98700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 55779328 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2148540 data_alloc: 234881024 data_used: 17442646
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 242 ms_handle_reset con 0x555b27b82000 session 0x555b2a437180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 242 ms_handle_reset con 0x555b27592c00 session 0x555b296c0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 242 ms_handle_reset con 0x555b29acc000 session 0x555b29d99880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f918c000/0x0/0x4ffc00000, data 0x2b891ff/0x2cfc000, compress 0x0/0x0/0x0, omap 0x3acd5, meta 0x3d3532b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142630912 unmapped: 53436416 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 52797440 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.884239197s of 10.393527031s, submitted: 196
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 243 ms_handle_reset con 0x555b2c170c00 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 243 ms_handle_reset con 0x555b2a9bc400 session 0x555b27d19340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 243 ms_handle_reset con 0x555b27592000 session 0x555b29890000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143343616 unmapped: 52723712 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8b55000/0x0/0x4ffc00000, data 0x31c67fa/0x3337000, compress 0x0/0x0/0x0, omap 0x3b2bf, meta 0x3d34d41), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143368192 unmapped: 52699136 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 ms_handle_reset con 0x555b27592c00 session 0x555b296c0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 ms_handle_reset con 0x555b2b3ac800 session 0x555b2966c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 52666368 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0x31c9a4e/0x3338000, compress 0x0/0x0/0x0, omap 0x3bad8, meta 0x3d34528), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2181514 data_alloc: 234881024 data_used: 17821078
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 ms_handle_reset con 0x555b2ac2a400 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 ms_handle_reset con 0x555b2ac2b000 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139124736 unmapped: 56942592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 245 ms_handle_reset con 0x555b27592000 session 0x555b27d18a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138452992 unmapped: 57614336 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138395648 unmapped: 57671680 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138395648 unmapped: 57671680 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 248 ms_handle_reset con 0x555b27b82000 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 248 ms_handle_reset con 0x555b27592c00 session 0x555b28e1d180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 57630720 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 249 ms_handle_reset con 0x555b2a9bc400 session 0x555b28e34a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f9883000/0x0/0x4ffc00000, data 0x24959f1/0x2603000, compress 0x0/0x0/0x0, omap 0x3cb17, meta 0x3d334e9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 249 ms_handle_reset con 0x555b27592000 session 0x555b28117a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2065308 data_alloc: 234881024 data_used: 9692952
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138452992 unmapped: 57614336 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138469376 unmapped: 57597952 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 251 ms_handle_reset con 0x555b27b82000 session 0x555b28e35c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 251 ms_handle_reset con 0x555b2b3ac800 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 57589760 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x249ac8b/0x2607000, compress 0x0/0x0/0x0, omap 0x3d218, meta 0x3d32de8), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.992765427s of 10.478529930s, submitted: 302
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 252 ms_handle_reset con 0x555b27592c00 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 252 ms_handle_reset con 0x555b29acc000 session 0x555b27d18380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 252 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x249ac8b/0x2607000, compress 0x0/0x0/0x0, omap 0x3d218, meta 0x3d32de8), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 57507840 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 253 ms_handle_reset con 0x555b27593c00 session 0x555b27d18a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 57507840 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 254 ms_handle_reset con 0x555b27592000 session 0x555b28117a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2077812 data_alloc: 234881024 data_used: 9692756
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 57507840 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 254 ms_handle_reset con 0x555b27592c00 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 57507840 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 254 handle_osd_map epochs [254,255], i have 255, src has [1,255]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 255 ms_handle_reset con 0x555b27b82000 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 255 ms_handle_reset con 0x555b2b3ac800 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 57450496 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 256 ms_handle_reset con 0x555b27592000 session 0x555b28ea0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f986c000/0x0/0x4ffc00000, data 0x24adcd5/0x261e000, compress 0x0/0x0/0x0, omap 0x3df5c, meta 0x3d320a4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f986c000/0x0/0x4ffc00000, data 0x24adcd5/0x261e000, compress 0x0/0x0/0x0, omap 0x3df5c, meta 0x3d320a4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138625024 unmapped: 57442304 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 257 ms_handle_reset con 0x555b27592c00 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 138641408 unmapped: 57425920 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b27b82000 session 0x555b29890c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b27593c00 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b27593800 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2091768 data_alloc: 234881024 data_used: 9693998
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139698176 unmapped: 56369152 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139698176 unmapped: 56369152 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b2ac2b000 session 0x555b28e1d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b27592c00 session 0x555b29699dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b26999400 session 0x555b273b4000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b26aa9000 session 0x555b28e1da40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x24b61e7/0x262c000, compress 0x0/0x0/0x0, omap 0x3ea06, meta 0x3d315fa), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139747328 unmapped: 56320000 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 ms_handle_reset con 0x555b27b82000 session 0x555b28ea01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870788574s of 10.000297546s, submitted: 114
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b26aa9000 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b26999400 session 0x555b28117340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b27593c00 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b27592c00 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b27592400 session 0x555b29699180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b26999400 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 ms_handle_reset con 0x555b26aa9000 session 0x555b29890000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 60375040 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 260 ms_handle_reset con 0x555b2ac2b000 session 0x555b28e45340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 260 ms_handle_reset con 0x555b27592000 session 0x555b28e341c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 60375040 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 260 heartbeat osd_stat(store_statfs(0x4fa352000/0x0/0x4ffc00000, data 0x19bca21/0x1b35000, compress 0x0/0x0/0x0, omap 0x3f684, meta 0x3d3097c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 261 ms_handle_reset con 0x555b27592c00 session 0x555b298c0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2007751 data_alloc: 218103808 data_used: 4728556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 60375040 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 262 ms_handle_reset con 0x555b27593c00 session 0x555b29b55c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 60350464 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 60350464 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 263 ms_handle_reset con 0x555b26999400 session 0x555b27d148c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 263 ms_handle_reset con 0x555b26aa9000 session 0x555b2966d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 263 ms_handle_reset con 0x555b27592000 session 0x555b28ea1dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 60342272 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 60342272 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b27592c00 session 0x555b28ea01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b2ac2b000 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2017067 data_alloc: 218103808 data_used: 4729570
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 heartbeat osd_stat(store_statfs(0x4fa34b000/0x0/0x4ffc00000, data 0x19c38cc/0x1b3d000, compress 0x0/0x0/0x0, omap 0x4024e, meta 0x3d2fdb2), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 60334080 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b26aa9000 session 0x555b27d15500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b26999400 session 0x555b28e1d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b27592000 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 60325888 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b27593c00 session 0x555b296c0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b26999400 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 ms_handle_reset con 0x555b26aa9000 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 60325888 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.087331772s of 10.252768517s, submitted: 123
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 266 ms_handle_reset con 0x555b27592000 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 266 ms_handle_reset con 0x555b29a00400 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 266 ms_handle_reset con 0x555b27260c00 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b2ac2b000 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b29a06400 session 0x555b2966c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 60276736 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b26aa9000 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b26999400 session 0x555b28117dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b27592c00 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135553024 unmapped: 60514304 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b26999400 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b29a00400 session 0x555b26fd2000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b27261c00 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b2701cc00 session 0x555b29890fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b2701d800 session 0x555b29773dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2034751 data_alloc: 218103808 data_used: 4730443
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b26999400 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b2701cc00 session 0x555b29890c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135634944 unmapped: 60432384 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b2701d800 session 0x555b29698000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f919e000/0x0/0x4ffc00000, data 0x19c9514/0x1b4e000, compress 0x0/0x0/0x0, omap 0x412f1, meta 0x4eced0f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 ms_handle_reset con 0x555b29a00400 session 0x555b29891c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 60416000 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b27261c00 session 0x555b273b4000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b281fd400 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b2701cc00 session 0x555b27d148c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b26999400 session 0x555b293bae00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 61153280 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b29a00400 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 ms_handle_reset con 0x555b2b3ab800 session 0x555b281b08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 269 ms_handle_reset con 0x555b2701d800 session 0x555b298c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 269 ms_handle_reset con 0x555b2b3ab800 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b26999400 session 0x555b29773dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 61136896 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701cc00 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b29a00400 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b281fd400 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 61112320 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b26999400 session 0x555b29d98380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701d800 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2106743 data_alloc: 218103808 data_used: 4731469
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b27b75000 session 0x555b28ea1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2b3ab800 session 0x555b2966d500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b27b75000 session 0x555b29b55880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f89a7000/0x0/0x4ffc00000, data 0x21b4362/0x233e000, compress 0x0/0x0/0x0, omap 0x426ca, meta 0x4ecd936), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 61112320 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b26999400 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701cc00 session 0x555b28117dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701d800 session 0x555b2a437180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 61104128 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b281fd400 session 0x555b281176c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f9198000/0x0/0x4ffc00000, data 0x19ce1dc/0x1b53000, compress 0x0/0x0/0x0, omap 0x42a7b, meta 0x4ecd585), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 ms_handle_reset con 0x555b2701cc00 session 0x555b29d98c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 271 ms_handle_reset con 0x555b26999400 session 0x555b27d0fc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 61104128 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 271 ms_handle_reset con 0x555b27b75000 session 0x555b29890c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 271 ms_handle_reset con 0x555b26aa9000 session 0x555b27d188c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.774101257s of 10.217343330s, submitted: 249
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 271 ms_handle_reset con 0x555b26aa9000 session 0x555b27d15340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b2b3ab800 session 0x555b2757ec40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 134995968 unmapped: 61071360 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b26999400 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b2701cc00 session 0x555b2a437c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b27b75000 session 0x555b27d0e8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b26999400 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b26aa9000 session 0x555b28e341c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 61054976 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058560 data_alloc: 218103808 data_used: 4736569
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 ms_handle_reset con 0x555b2b3ab800 session 0x555b29891c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b2701cc00 session 0x555b2966d880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b281fd400 session 0x555b28ea0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 61046784 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b281fd400 session 0x555b29b55c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 61046784 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b26999400 session 0x555b298c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b26aa9000 session 0x555b267468c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f9195000/0x0/0x4ffc00000, data 0x19d353c/0x1b57000, compress 0x0/0x0/0x0, omap 0x4398f, meta 0x4ecc671), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2060742 data_alloc: 218103808 data_used: 4737210
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b2701cc00 session 0x555b27d0f880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.912864685s of 10.011121750s, submitted: 75
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 ms_handle_reset con 0x555b2b3ab800 session 0x555b26fd2700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 61038592 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f9194000/0x0/0x4ffc00000, data 0x19d359e/0x1b58000, compress 0x0/0x0/0x0, omap 0x43aa7, meta 0x4ecc559), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b26999400 session 0x555b298c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 61030400 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2067694 data_alloc: 218103808 data_used: 4737210
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 61030400 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 61030400 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f918e000/0x0/0x4ffc00000, data 0x19d502d/0x1b5c000, compress 0x0/0x0/0x0, omap 0x43cff, meta 0x4ecc301), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 61030400 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 61030400 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b26aa9000 session 0x555b28e341c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b281fd400 session 0x555b28ea1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135045120 unmapped: 61022208 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f918e000/0x0/0x4ffc00000, data 0x19d502d/0x1b5c000, compress 0x0/0x0/0x0, omap 0x43cff, meta 0x4ecc301), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2066830 data_alloc: 218103808 data_used: 4737210
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b27592000 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b26999400 session 0x555b29773dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135045120 unmapped: 61022208 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135045120 unmapped: 61022208 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b27592000 session 0x555b29890fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 ms_handle_reset con 0x555b281fd400 session 0x555b296996c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 61014016 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 275 ms_handle_reset con 0x555b27592c00 session 0x555b298908c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.526011467s of 10.566509247s, submitted: 32
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b27592c00 session 0x555b29699180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135069696 unmapped: 60997632 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b26aa9000 session 0x555b273b4000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135069696 unmapped: 60997632 heap: 196067328 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f9186000/0x0/0x4ffc00000, data 0x19d8764/0x1b62000, compress 0x0/0x0/0x0, omap 0x44418, meta 0x4ecbbe8), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2074670 data_alloc: 218103808 data_used: 4737225
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 27402240 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 69320704 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 69255168 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b2701cc00 session 0x555b2a41b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 69246976 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135340032 unmapped: 69124096 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489778 data_alloc: 218103808 data_used: 4737225
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 heartbeat osd_stat(store_statfs(0x4ea98a000/0x0/0x4ffc00000, data 0x101d8764/0x10362000, compress 0x0/0x0/0x0, omap 0x444a4, meta 0x4ecbb5c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 64856064 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 heartbeat osd_stat(store_statfs(0x4e798a000/0x0/0x4ffc00000, data 0x131d8764/0x13362000, compress 0x0/0x0/0x0, omap 0x444a4, meta 0x4ecbb5c), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 51986432 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b26999400 session 0x555b293bba40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 heartbeat osd_stat(store_statfs(0x4e2d8a000/0x0/0x4ffc00000, data 0x17dd8764/0x17f62000, compress 0x0/0x0/0x0, omap 0x444a4, meta 0x4ecbb5c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 68673536 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b281fd400 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b26999400 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 ms_handle_reset con 0x555b26aa9000 session 0x555b27d0e700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b2701cc00 session 0x555b296c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b27592c00 session 0x555b27d0efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b29a06400 session 0x555b29b31c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b29a06400 session 0x555b296c0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b26999400 session 0x555b298c16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b27592000 session 0x555b29bc8000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b26aa9000 session 0x555b29698000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 67887104 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b2701cc00 session 0x555b2a437180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.071174145s of 10.828002930s, submitted: 146
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b26999400 session 0x555b28117dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b26aa9000 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 67870720 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 heartbeat osd_stat(store_statfs(0x4e0107000/0x0/0x4ffc00000, data 0x1aa57362/0x1abe3000, compress 0x0/0x0/0x0, omap 0x44b47, meta 0x4ecb4b9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b2701cc00 session 0x555b28ea01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b27592000 session 0x555b26b4c700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b29a06400 session 0x555b293bb880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4243169 data_alloc: 218103808 data_used: 4737225
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136773632 unmapped: 67690496 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 64126976 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 ms_handle_reset con 0x555b27592000 session 0x555b28e1c540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 278 ms_handle_reset con 0x555b27592c00 session 0x555b2966d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 64102400 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b27260c00 session 0x555b2757ee00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b2ac2b000 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b29a06400 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b27260c00 session 0x555b2a436fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b27592000 session 0x555b28ea1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b2701cc00 session 0x555b28e1d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b27592c00 session 0x555b27d15500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b2ac2b000 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b26999400 session 0x555b281b08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b26aa9000 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 63782912 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b2701cc00 session 0x555b28ea01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 68050944 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 heartbeat osd_stat(store_statfs(0x4e07f7000/0x0/0x4ffc00000, data 0x1a363046/0x1a4f5000, compress 0x0/0x0/0x0, omap 0x45c5d, meta 0x4eca3a3), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4222628 data_alloc: 218103808 data_used: 4737241
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 ms_handle_reset con 0x555b27592000 session 0x555b29d981c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 68042752 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 280 ms_handle_reset con 0x555b27592c00 session 0x555b27d0e700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 280 ms_handle_reset con 0x555b26999400 session 0x555b267468c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 68026368 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b27dab800 session 0x555b28ea0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b27592c00 session 0x555b2757f340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b27260c00 session 0x555b29699180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b297eb400 session 0x555b29bc9340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68009984 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b26999400 session 0x555b29891880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 ms_handle_reset con 0x555b297ea000 session 0x555b281cc540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 282 ms_handle_reset con 0x555b27260c00 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 67985408 heap: 204464128 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.487119675s of 10.094729424s, submitted: 153
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 80265216 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 283 heartbeat osd_stat(store_statfs(0x4dc7e5000/0x0/0x4ffc00000, data 0x1e369f7a/0x1e501000, compress 0x0/0x0/0x0, omap 0x468f7, meta 0x4ec9709), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 283 ms_handle_reset con 0x555b27dab800 session 0x555b28ea16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4802444 data_alloc: 218103808 data_used: 4737842
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 137592832 unmapped: 83664896 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 283 handle_osd_map epochs [283,284], i have 284, src has [1,284]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 78913536 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 284 ms_handle_reset con 0x555b2701c000 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 139640832 unmapped: 81616896 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 284 heartbeat osd_stat(store_statfs(0x4d07e6000/0x0/0x4ffc00000, data 0x2a36bb32/0x2a504000, compress 0x0/0x0/0x0, omap 0x46cbd, meta 0x4ec9343), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 148652032 unmapped: 72605696 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 285 ms_handle_reset con 0x555b27dabc00 session 0x555b27d0f180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 285 ms_handle_reset con 0x555b27261400 session 0x555b26fd3500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 285 ms_handle_reset con 0x555b26999400 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 80920576 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 286 ms_handle_reset con 0x555b2701c000 session 0x555b2a436540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6036635 data_alloc: 218103808 data_used: 6319783
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 80896000 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 286 ms_handle_reset con 0x555b27dab800 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 286 ms_handle_reset con 0x555b26999400 session 0x555b29772a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 80863232 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 287 ms_handle_reset con 0x555b27260c00 session 0x555b29699c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 287 ms_handle_reset con 0x555b2701c000 session 0x555b2a436a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 80822272 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cbbe0000/0x0/0x4ffc00000, data 0x2ef709d6/0x2f10a000, compress 0x0/0x0/0x0, omap 0x47947, meta 0x4ec86b9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 80822272 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 80814080 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.010012627s of 10.459098816s, submitted: 176
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 ms_handle_reset con 0x555b27261400 session 0x555b28117c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 ms_handle_reset con 0x555b27dabc00 session 0x555b2a436540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6068021 data_alloc: 218103808 data_used: 6484220
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 77660160 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 ms_handle_reset con 0x555b26999400 session 0x555b267468c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 77643776 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 ms_handle_reset con 0x555b2701c000 session 0x555b29891880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 heartbeat osd_stat(store_statfs(0x4cb6fb000/0x0/0x4ffc00000, data 0x2f45448d/0x2f5ef000, compress 0x0/0x0/0x0, omap 0x47fb5, meta 0x4ec804b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 77643776 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b27260c00 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 143613952 unmapped: 77643776 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b27261400 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b297ea000 session 0x555b28ea1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b26999400 session 0x555b29d99a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b2701c000 session 0x555b298c0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 ms_handle_reset con 0x555b27260c00 session 0x555b28ea0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 78929920 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6152335 data_alloc: 218103808 data_used: 6529276
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 ms_handle_reset con 0x555b27261400 session 0x555b26fd3500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142172160 unmapped: 79085568 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 ms_handle_reset con 0x555b2701d000 session 0x555b28e1ca80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 ms_handle_reset con 0x555b26999400 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 heartbeat osd_stat(store_statfs(0x4cabbd000/0x0/0x4ffc00000, data 0x2ff8fc89/0x3012d000, compress 0x0/0x0/0x0, omap 0x4876c, meta 0x4ec7894), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 ms_handle_reset con 0x555b27260c00 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142180352 unmapped: 79077376 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142180352 unmapped: 79077376 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 291 ms_handle_reset con 0x555b29bf5c00 session 0x555b293bac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 291 heartbeat osd_stat(store_statfs(0x4cabb8000/0x0/0x4ffc00000, data 0x2ff91841/0x30130000, compress 0x0/0x0/0x0, omap 0x48e49, meta 0x4ec71b7), peers [0,2] op hist [0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 ms_handle_reset con 0x555b27261400 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 ms_handle_reset con 0x555b27299c00 session 0x555b2a437500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 ms_handle_reset con 0x555b2701c000 session 0x555b26fd28c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 79060992 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 79060992 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6170747 data_alloc: 218103808 data_used: 6529276
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142204928 unmapped: 79052800 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142204928 unmapped: 79052800 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 heartbeat osd_stat(store_statfs(0x4cab15000/0x0/0x4ffc00000, data 0x30034a38/0x301d5000, compress 0x0/0x0/0x0, omap 0x4900c, meta 0x4ec6ff4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142204928 unmapped: 79052800 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 292 handle_osd_map epochs [292,293], i have 293, src has [1,293]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.149250984s of 13.502456665s, submitted: 139
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 79036416 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 293 ms_handle_reset con 0x555b26999400 session 0x555b28ea1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 79036416 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6173392 data_alloc: 218103808 data_used: 6529290
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 79036416 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 293 heartbeat osd_stat(store_statfs(0x4cab12000/0x0/0x4ffc00000, data 0x300364b7/0x301d8000, compress 0x0/0x0/0x0, omap 0x49430, meta 0x4ec6bd0), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 294 ms_handle_reset con 0x555b27260c00 session 0x555b26fd3880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142237696 unmapped: 79020032 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142237696 unmapped: 79020032 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 17K writes, 70K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 5951 syncs, 2.96 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 23.30 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4483 syncs, 2.32 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 294 ms_handle_reset con 0x555b27261400 session 0x555b2757e1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b27299c00 session 0x555b28e44700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b26999400 session 0x555b28e1d6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b2701c000 session 0x555b29bc8000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 78970880 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b27260c00 session 0x555b2966ca80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b27299c00 session 0x555b281cd6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b27261400 session 0x555b281b1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b29bf5c00 session 0x555b27d0ea80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b2701c000 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 ms_handle_reset con 0x555b27260c00 session 0x555b29890e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 296 ms_handle_reset con 0x555b26999400 session 0x555b28ea16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 66830336 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6375710 data_alloc: 234881024 data_used: 11452698
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27261400 session 0x555b29890e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 heartbeat osd_stat(store_statfs(0x4c9252000/0x0/0x4ffc00000, data 0x318ec46b/0x31a96000, compress 0x0/0x0/0x0, omap 0x4abec, meta 0x4ec5414), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 155516928 unmapped: 65740800 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b26999400 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2701c000 session 0x555b28e1d500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27260c00 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27261400 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 155516928 unmapped: 65740800 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2720ac00 session 0x555b27d18000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b26999400 session 0x555b27d0ea80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2701c000 session 0x555b29773180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: mgrc ms_handle_reset ms_handle_reset con 0x555b2720a000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/496403208
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/496403208,v1:192.168.122.100:6801/496403208]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: mgrc handle_mgr_configure stats_period=5
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27260c00 session 0x555b28ea1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27299c00 session 0x555b28e44fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27261400 session 0x555b28e1d6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 64987136 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b26aa9800 session 0x555b27d181c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b26aa8c00 session 0x555b27d14000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 156213248 unmapped: 65044480 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 158097408 unmapped: 63160320 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6444137 data_alloc: 234881024 data_used: 21232410
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 160595968 unmapped: 60661760 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 heartbeat osd_stat(store_statfs(0x4c9226000/0x0/0x4ffc00000, data 0x319164f0/0x31ac2000, compress 0x0/0x0/0x0, omap 0x4ac78, meta 0x4ec5388), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 160595968 unmapped: 60661760 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27299c00 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.670517921s of 14.290847778s, submitted: 170
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2ac2ac00 session 0x555b281cc540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2ac2a000 session 0x555b28e35180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b29acdc00 session 0x555b26747880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b2b3a9400 session 0x555b26fd2e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 68222976 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 ms_handle_reset con 0x555b27299c00 session 0x555b2757f500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 68214784 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 68214784 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6431389 data_alloc: 234881024 data_used: 21236506
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 68214784 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29acdc00 session 0x555b2a429a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 152911872 unmapped: 68345856 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2a000 session 0x555b2a4456c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 heartbeat osd_stat(store_statfs(0x4c9225000/0x0/0x4ffc00000, data 0x31917f6f/0x31ac5000, compress 0x0/0x0/0x0, omap 0x4b274, meta 0x4ec4d8c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2ac00 session 0x555b26746a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29adb000 session 0x555b27d18e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 68214784 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 68214784 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 57196544 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6625850 data_alloc: 251658240 data_used: 30151962
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 heartbeat osd_stat(store_statfs(0x4c9226000/0x0/0x4ffc00000, data 0x31917f92/0x31ac6000, compress 0x0/0x0/0x0, omap 0x4b274, meta 0x4ec4d8c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169074688 unmapped: 52183040 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29acdc00 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2a000 session 0x555b26fd2fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2ac00 session 0x555b296c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2fb62800 session 0x555b2966cc40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29d28400 session 0x555b281cd6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 49872896 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29acdc00 session 0x555b298c1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29d28400 session 0x555b2a4361c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2a000 session 0x555b26fd21c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2ac00 session 0x555b29698c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 51036160 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 heartbeat osd_stat(store_statfs(0x4c7354000/0x0/0x4ffc00000, data 0x33b53014/0x33998000, compress 0x0/0x0/0x0, omap 0x4bdf3, meta 0x4ec420d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.660578728s of 11.082254410s, submitted: 201
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2fb62800 session 0x555b2966c000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 51036160 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 51036160 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29acdc00 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6731418 data_alloc: 251658240 data_used: 30639386
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 51036160 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29d28400 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2a000 session 0x555b28ea16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2ac00 session 0x555b2a4361c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 heartbeat osd_stat(store_statfs(0x4c7354000/0x0/0x4ffc00000, data 0x33b53014/0x33998000, compress 0x0/0x0/0x0, omap 0x4bdf3, meta 0x4ec420d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 50847744 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 50585600 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b26999400 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2701c000 session 0x555b2a436fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b27260c00 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 185475072 unmapped: 35782656 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b29acdc00 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181469184 unmapped: 39788544 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6731586 data_alloc: 251658240 data_used: 37882056
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 39559168 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 heartbeat osd_stat(store_statfs(0x4c79c5000/0x0/0x4ffc00000, data 0x334e2f8f/0x33326000, compress 0x0/0x0/0x0, omap 0x4c2a0, meta 0x4ec3d60), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 ms_handle_reset con 0x555b2ac2a000 session 0x555b293baa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 39559168 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 299 ms_handle_reset con 0x555b2ac2ac00 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 300 ms_handle_reset con 0x555b29a00800 session 0x555b2757e1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186884096 unmapped: 34373632 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 300 ms_handle_reset con 0x555b29d28400 session 0x555b29699c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 34365440 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.102519035s of 10.377662659s, submitted: 127
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 33947648 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 300 handle_osd_map epochs [300,301], i have 301, src has [1,301]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 301 ms_handle_reset con 0x555b2701c000 session 0x555b29d99880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6815552 data_alloc: 251658240 data_used: 37890798
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 33955840 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 301 heartbeat osd_stat(store_statfs(0x4c6faf000/0x0/0x4ffc00000, data 0x33f7d2b7/0x33d3b000, compress 0x0/0x0/0x0, omap 0x4d11b, meta 0x4ec2ee5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 33955840 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 191922176 unmapped: 29335552 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 190464000 unmapped: 30793728 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 301 heartbeat osd_stat(store_statfs(0x4c52c1000/0x0/0x4ffc00000, data 0x34ac52b7/0x34883000, compress 0x0/0x0/0x0, omap 0x4d11b, meta 0x6062ee5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 301 heartbeat osd_stat(store_statfs(0x4c52c1000/0x0/0x4ffc00000, data 0x34ac52b7/0x34883000, compress 0x0/0x0/0x0, omap 0x4d11b, meta 0x6062ee5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 190488576 unmapped: 30769152 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 ms_handle_reset con 0x555b27260c00 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 ms_handle_reset con 0x555b29acdc00 session 0x555b29891880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6827838 data_alloc: 251658240 data_used: 39303918
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 heartbeat osd_stat(store_statfs(0x4c5d51000/0x0/0x4ffc00000, data 0x33fa9868/0x33df1000, compress 0x0/0x0/0x0, omap 0x4d986, meta 0x606267a), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 190603264 unmapped: 30654464 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 ms_handle_reset con 0x555b2701c000 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 heartbeat osd_stat(store_statfs(0x4c5d51000/0x0/0x4ffc00000, data 0x33fa9868/0x33df1000, compress 0x0/0x0/0x0, omap 0x4da12, meta 0x60625ee), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 190603264 unmapped: 30654464 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 30490624 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 303 ms_handle_reset con 0x555b27260c00 session 0x555b2a428fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 29417472 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 ms_handle_reset con 0x555b29d28400 session 0x555b298c01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 ms_handle_reset con 0x555b29a00800 session 0x555b281b0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 29417472 heap: 221257728 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.224382401s of 10.888598442s, submitted: 205
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7121117 data_alloc: 251658240 data_used: 39312110
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201752576 unmapped: 57294848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 heartbeat osd_stat(store_statfs(0x4c2d53000/0x0/0x4ffc00000, data 0x36facf7f/0x36df9000, compress 0x0/0x0/0x0, omap 0x4e01b, meta 0x6061fe5), peers [0,2] op hist [0,2,0,0,0,0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 194232320 unmapped: 64815104 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 heartbeat osd_stat(store_statfs(0x4be153000/0x0/0x4ffc00000, data 0x3bbacf7f/0x3b9f9000, compress 0x0/0x0/0x0, omap 0x4e01b, meta 0x6061fe5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 ms_handle_reset con 0x555b281fc800 session 0x555b281b1dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 heartbeat osd_stat(store_statfs(0x4be153000/0x0/0x4ffc00000, data 0x3bbacf7f/0x3b9f9000, compress 0x0/0x0/0x0, omap 0x4e01b, meta 0x6061fe5), peers [0,2] op hist [0,0,0,0,2,1,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196124672 unmapped: 62922752 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196452352 unmapped: 62595072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200695808 unmapped: 58351616 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29ada000 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29a00800 session 0x555b2966c1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b2ac2a000 session 0x555b2966cfc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29d28400 session 0x555b293bae00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8557467 data_alloc: 251658240 data_used: 41032544
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b2bba7c00 session 0x555b296c0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200802304 unmapped: 58245120 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 heartbeat osd_stat(store_statfs(0x4b1d4f000/0x0/0x4ffc00000, data 0x47faeee8/0x47dfd000, compress 0x0/0x0/0x0, omap 0x4e53c, meta 0x6061ac4), peers [0,2] op hist [0,0,1,2])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29a00800 session 0x555b28e45180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29ada000 session 0x555b28117dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 58949632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 heartbeat osd_stat(store_statfs(0x4c314f000/0x0/0x4ffc00000, data 0x33fae99c/0x33dfb000, compress 0x0/0x0/0x0, omap 0x4e53c, meta 0x6061ac4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b29d28400 session 0x555b27d0ea80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b27260c00 session 0x555b293bbdc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 58949632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 ms_handle_reset con 0x555b2ac2a000 session 0x555b27d141c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 ms_handle_reset con 0x555b2c170c00 session 0x555b298c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 ms_handle_reset con 0x555b298f5400 session 0x555b281cd340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200097792 unmapped: 58949632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 ms_handle_reset con 0x555b27260c00 session 0x555b2a4376c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186720256 unmapped: 72327168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.272665024s of 10.154292107s, submitted: 519
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 ms_handle_reset con 0x555b29ada000 session 0x555b28ea0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6702746 data_alloc: 234881024 data_used: 26436236
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 ms_handle_reset con 0x555b29d28400 session 0x555b28e45340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 heartbeat osd_stat(store_statfs(0x4c7336000/0x0/0x4ffc00000, data 0x3277b4b8/0x325c6000, compress 0x0/0x0/0x0, omap 0x4ea5b, meta 0x60615a5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186769408 unmapped: 72278016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 heartbeat osd_stat(store_statfs(0x4c7134000/0x0/0x4ffc00000, data 0x3297c6b8/0x327c8000, compress 0x0/0x0/0x0, omap 0x4eae7, meta 0x6061519), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 187236352 unmapped: 71811072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 307 ms_handle_reset con 0x555b29a00800 session 0x555b28e1cfc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b29d28400 session 0x555b27d19340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 71794688 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b27260c00 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b298f5400 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b29ada000 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 185909248 unmapped: 73138176 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 heartbeat osd_stat(store_statfs(0x4c90dd000/0x0/0x4ffc00000, data 0x308b3ea8/0x30a6d000, compress 0x0/0x0/0x0, omap 0x4f7eb, meta 0x6060815), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b27299c00 session 0x555b28e1d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 ms_handle_reset con 0x555b298f5400 session 0x555b2757efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 73089024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6482378 data_alloc: 234881024 data_used: 24863735
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 310 ms_handle_reset con 0x555b27260c00 session 0x555b2966c1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 185917440 unmapped: 73129984 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 310 ms_handle_reset con 0x555b29a00800 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 74514432 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 310 ms_handle_reset con 0x555b29ada000 session 0x555b28e44000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 310 ms_handle_reset con 0x555b27260c00 session 0x555b27d0e700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 311 ms_handle_reset con 0x555b27299c00 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 184549376 unmapped: 74498048 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 311 ms_handle_reset con 0x555b298f5400 session 0x555b2a429c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 75218944 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f6ae4000/0x0/0x4ffc00000, data 0x2eb0034/0x3068000, compress 0x0/0x0/0x0, omap 0x509c2, meta 0x605f63e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 313 ms_handle_reset con 0x555b29a00800 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 313 ms_handle_reset con 0x555b29d28400 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 83533824 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 313 handle_osd_map epochs [313,314], i have 314, src has [1,314]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.433931351s of 10.281209946s, submitted: 553
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 ms_handle_reset con 0x555b27260c00 session 0x555b281cc700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2659186 data_alloc: 234881024 data_used: 11477963
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f790a000/0x0/0x4ffc00000, data 0x20812a5/0x223b000, compress 0x0/0x0/0x0, omap 0x51563, meta 0x605ea9d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 83525632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 175464448 unmapped: 83582976 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 ms_handle_reset con 0x555b26aa9000 session 0x555b28116e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 ms_handle_reset con 0x555b2701cc00 session 0x555b27d14fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f790a000/0x0/0x4ffc00000, data 0x20812a5/0x223b000, compress 0x0/0x0/0x0, omap 0x51563, meta 0x605ea9d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167378944 unmapped: 91668480 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 ms_handle_reset con 0x555b27299c00 session 0x555b2757f180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167387136 unmapped: 91660288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7f78000/0x0/0x4ffc00000, data 0x1a1a2a5/0x1bd4000, compress 0x0/0x0/0x0, omap 0x51b6f, meta 0x605e491), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 314 handle_osd_map epochs [315,315], i have 315, src has [1,315]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167387136 unmapped: 91660288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2595640 data_alloc: 218103808 data_used: 4768617
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167387136 unmapped: 91660288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f7f73000/0x0/0x4ffc00000, data 0x1a1bd6c/0x1bd7000, compress 0x0/0x0/0x0, omap 0x51fef, meta 0x605e011), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167387136 unmapped: 91660288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 315 ms_handle_reset con 0x555b298f5400 session 0x555b281b1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 315 ms_handle_reset con 0x555b26aa9000 session 0x555b28117500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 91652096 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 91627520 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b2701cc00 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 91627520 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2606009 data_alloc: 218103808 data_used: 4768875
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f7f69000/0x0/0x4ffc00000, data 0x1a1f7a2/0x1bde000, compress 0x0/0x0/0x0, omap 0x5277c, meta 0x605d884), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 91627520 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b27260c00 session 0x555b27d15dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.253048897s of 11.496800423s, submitted: 122
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b29a00800 session 0x555b29d98a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b27299c00 session 0x555b2757e1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 91643904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b26aa9000 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b2701cc00 session 0x555b2a437c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 91643904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f7f6d000/0x0/0x4ffc00000, data 0x1a1f7b2/0x1bdf000, compress 0x0/0x0/0x0, omap 0x52806, meta 0x605d7fa), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 ms_handle_reset con 0x555b27260c00 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167411712 unmapped: 91635712 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f7f6e000/0x0/0x4ffc00000, data 0x1a1f7a2/0x1bde000, compress 0x0/0x0/0x0, omap 0x52890, meta 0x605d770), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 318 ms_handle_reset con 0x555b29a00800 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167510016 unmapped: 91537408 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 ms_handle_reset con 0x555b2c170c00 session 0x555b29bc8000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2612970 data_alloc: 218103808 data_used: 4773774
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167526400 unmapped: 91521024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167526400 unmapped: 91521024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167526400 unmapped: 91521024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f7f65000/0x0/0x4ffc00000, data 0x1a22b3a/0x1be2000, compress 0x0/0x0/0x0, omap 0x52ea6, meta 0x605d15a), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 ms_handle_reset con 0x555b26aa9000 session 0x555b2757f340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 ms_handle_reset con 0x555b27260c00 session 0x555b298c08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 ms_handle_reset con 0x555b2701cc00 session 0x555b29699a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 ms_handle_reset con 0x555b29a00800 session 0x555b2757f180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167493632 unmapped: 91553792 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b2ac2a000 session 0x555b29891a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167534592 unmapped: 91512832 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2618067 data_alloc: 218103808 data_used: 4777835
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b26aa9000 session 0x555b26747180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167534592 unmapped: 91512832 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b2701cc00 session 0x555b29773dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b27260c00 session 0x555b28ea0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b29a00800 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b2bba7c00 session 0x555b28ea0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b26aa9000 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 heartbeat osd_stat(store_statfs(0x4f7f61000/0x0/0x4ffc00000, data 0x1a24784/0x1be9000, compress 0x0/0x0/0x0, omap 0x53e35, meta 0x605c1cb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.816434860s of 10.031011581s, submitted: 91
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b2701cc00 session 0x555b281cd340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167559168 unmapped: 91488256 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b29a00800 session 0x555b29d99a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b27260c00 session 0x555b28e1cfc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 ms_handle_reset con 0x555b2c33bc00 session 0x555b2a436fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168632320 unmapped: 90415104 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b26aa9000 session 0x555b29890700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b2701cc00 session 0x555b267468c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b27260c00 session 0x555b29d98540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167550976 unmapped: 91496448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167550976 unmapped: 91496448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b29a00800 session 0x555b281cc000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b29bfdc00 session 0x555b281176c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630902 data_alloc: 218103808 data_used: 4778455
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b26aa9000 session 0x555b28e35c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b27260c00 session 0x555b2a428380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b2701cc00 session 0x555b2a429500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167583744 unmapped: 91463680 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b29a00800 session 0x555b293bbc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b29a04c00 session 0x555b2a437340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b26aa9000 session 0x555b26746380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 heartbeat osd_stat(store_statfs(0x4f7f62000/0x0/0x4ffc00000, data 0x1a262e6/0x1bea000, compress 0x0/0x0/0x0, omap 0x554d2, meta 0x605ab2e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167583744 unmapped: 91463680 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 ms_handle_reset con 0x555b27260c00 session 0x555b27d15a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b2701cc00 session 0x555b2a429a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b29a00800 session 0x555b2a41b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167600128 unmapped: 91447296 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167608320 unmapped: 91439104 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b27299800 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b29adc000 session 0x555b26fd2000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167641088 unmapped: 91406336 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b26aa9000 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7f5e000/0x0/0x4ffc00000, data 0x1a27e92/0x1bee000, compress 0x0/0x0/0x0, omap 0x558aa, meta 0x605a756), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b2701cc00 session 0x555b281cc540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2636621 data_alloc: 218103808 data_used: 4779084
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 91398144 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b27260c00 session 0x555b298916c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 91398144 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.094756126s of 10.290254593s, submitted: 137
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 ms_handle_reset con 0x555b29d2c000 session 0x555b28e35dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b29a00800 session 0x555b28e35c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167673856 unmapped: 91373568 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b26aa9000 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b2701cc00 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b29d2c000 session 0x555b28e34e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b27260c00 session 0x555b28e44fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167731200 unmapped: 91316224 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b29adc000 session 0x555b27d15a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b2701cc00 session 0x555b26fd2700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b29a00800 session 0x555b28ea01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 ms_handle_reset con 0x555b29d2c000 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f7f5a000/0x0/0x4ffc00000, data 0x1a29ae4/0x1bf2000, compress 0x0/0x0/0x0, omap 0x56f05, meta 0x60590fb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b26aa9000 session 0x555b28e45500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b2accc400 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 91283456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b2701cc00 session 0x555b2a428000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2650694 data_alloc: 218103808 data_used: 4779653
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f7f54000/0x0/0x4ffc00000, data 0x1a2b6f0/0x1bf5000, compress 0x0/0x0/0x0, omap 0x57506, meta 0x6058afa), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167780352 unmapped: 91267072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b29a00800 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b29adc000 session 0x555b298c1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b29d2c000 session 0x555b297728c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b2701cc00 session 0x555b28ea1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 91258880 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b29a00800 session 0x555b281b0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 ms_handle_reset con 0x555b29adc000 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167460864 unmapped: 91586560 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b29d2c000 session 0x555b2a4288c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167460864 unmapped: 91586560 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b2accc400 session 0x555b293bb880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f7f58000/0x0/0x4ffc00000, data 0x1a2b6e0/0x1bf4000, compress 0x0/0x0/0x0, omap 0x57a17, meta 0x60585e9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167460864 unmapped: 91586560 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b2701cc00 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648794 data_alloc: 218103808 data_used: 4780266
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b29a00800 session 0x555b2757efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167460864 unmapped: 91586560 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b29adc000 session 0x555b2a429880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b27299000 session 0x555b298c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 91578368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b29d2c000 session 0x555b27d14540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.491972923s of 10.828331947s, submitted: 183
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 ms_handle_reset con 0x555b2701cc00 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167772160 unmapped: 91275264 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f7f55000/0x0/0x4ffc00000, data 0x1a2d17b/0x1bf7000, compress 0x0/0x0/0x0, omap 0x58387, meta 0x6057c79), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 91242496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 326 ms_handle_reset con 0x555b2accd800 session 0x555b28ea0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 91250688 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 327 ms_handle_reset con 0x555b29002000 session 0x555b29772000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2666679 data_alloc: 218103808 data_used: 4780652
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 91250688 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 328 ms_handle_reset con 0x555b2b3ad400 session 0x555b28ea0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 328 ms_handle_reset con 0x555b29adc000 session 0x555b27d15dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 328 ms_handle_reset con 0x555b2701cc00 session 0x555b2a428c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 91283456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167780352 unmapped: 91267072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 ms_handle_reset con 0x555b2accd800 session 0x555b28e34fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 ms_handle_reset con 0x555b2b3ad400 session 0x555b2948ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f7f24000/0x0/0x4ffc00000, data 0x1a563f6/0x1c26000, compress 0x0/0x0/0x0, omap 0x58b81, meta 0x605747f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 ms_handle_reset con 0x555b27299000 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 ms_handle_reset con 0x555b29a00800 session 0x555b2757e1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 91242496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 ms_handle_reset con 0x555b29a00800 session 0x555b28117a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 ms_handle_reset con 0x555b2701cc00 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 ms_handle_reset con 0x555b29002000 session 0x555b293ba1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 ms_handle_reset con 0x555b2accd800 session 0x555b2a4288c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 ms_handle_reset con 0x555b27299000 session 0x555b2948a8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167821312 unmapped: 91226112 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2679030 data_alloc: 218103808 data_used: 4781120
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167821312 unmapped: 91226112 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 ms_handle_reset con 0x555b2701cc00 session 0x555b2757ec40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b29002000 session 0x555b298916c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b29a00800 session 0x555b27d14000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b2accd800 session 0x555b2966cc40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b2b3ad400 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f7f3f000/0x0/0x4ffc00000, data 0x1a360e6/0x1c0b000, compress 0x0/0x0/0x0, omap 0x59721, meta 0x60568df), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 89096192 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b29002000 session 0x555b297728c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b2701cc00 session 0x555b2a4361c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818831444s of 10.051674843s, submitted: 112
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 ms_handle_reset con 0x555b2accd800 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 89047040 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f7f39000/0x0/0x4ffc00000, data 0x1a37999/0x1c0e000, compress 0x0/0x0/0x0, omap 0x59bdf, meta 0x6056421), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 332 ms_handle_reset con 0x555b27b74c00 session 0x555b2a428700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 89038848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b2accf800 session 0x555b28ea1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b29a00800 session 0x555b28ea16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b29d2d400 session 0x555b28e45880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b2accf800 session 0x555b298c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170033152 unmapped: 89014272 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b2701cc00 session 0x555b27d0fc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b27b74c00 session 0x555b298c1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b2701cc00 session 0x555b28e44700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2697192 data_alloc: 218103808 data_used: 4781234
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170033152 unmapped: 89014272 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b2accf800 session 0x555b29d988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 ms_handle_reset con 0x555b29002000 session 0x555b28ea0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b29d2d400 session 0x555b2590dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b2accd800 session 0x555b2a436c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b29a00800 session 0x555b27d141c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b2701cc00 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170065920 unmapped: 88981504 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b29002000 session 0x555b281cd880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b29d2d400 session 0x555b29773180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 heartbeat osd_stat(store_statfs(0x4f7f2f000/0x0/0x4ffc00000, data 0x1a3d12c/0x1c1b000, compress 0x0/0x0/0x0, omap 0x5b183, meta 0x6054e7d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 88940544 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 ms_handle_reset con 0x555b2accd800 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 ms_handle_reset con 0x555b2701cc00 session 0x555b29d98e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 ms_handle_reset con 0x555b29a00800 session 0x555b281cc1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 ms_handle_reset con 0x555b2accf800 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 88924160 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 ms_handle_reset con 0x555b29bfe800 session 0x555b28e44fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 335 handle_osd_map epochs [335,336], i have 336, src has [1,336]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b29bfe000 session 0x555b29d98fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b29d2d400 session 0x555b298c1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b2701cc00 session 0x555b27d18fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b29002000 session 0x555b281b0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 88604672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2807943 data_alloc: 218103808 data_used: 4782925
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f6e3e000/0x0/0x4ffc00000, data 0x2b2b73b/0x2d0a000, compress 0x0/0x0/0x0, omap 0x5c07e, meta 0x6053f82), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 88604672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b29bfe800 session 0x555b29772c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b2accf800 session 0x555b26fd3340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 ms_handle_reset con 0x555b29002000 session 0x555b293ba700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 337 ms_handle_reset con 0x555b2cfb1800 session 0x555b296c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 337 ms_handle_reset con 0x555b28190c00 session 0x555b28ea0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170311680 unmapped: 88735744 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 338 ms_handle_reset con 0x555b29bfe800 session 0x555b2966c000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 338 ms_handle_reset con 0x555b29d2d400 session 0x555b28e1c700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 338 ms_handle_reset con 0x555b29a00800 session 0x555b296c1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 338 ms_handle_reset con 0x555b2701cc00 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 88711168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.458157539s of 10.310492516s, submitted: 237
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 ms_handle_reset con 0x555b29d2d400 session 0x555b29699c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f6e37000/0x0/0x4ffc00000, data 0x2b2f090/0x2d11000, compress 0x0/0x0/0x0, omap 0x5cc6b, meta 0x6053395), peers [0,2] op hist [1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 ms_handle_reset con 0x555b28190c00 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 170360832 unmapped: 88686592 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f6e37000/0x0/0x4ffc00000, data 0x2b2f090/0x2d11000, compress 0x0/0x0/0x0, omap 0x5cc6b, meta 0x6053395), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 ms_handle_reset con 0x555b29002000 session 0x555b29b316c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 ms_handle_reset con 0x555b2701cc00 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169549824 unmapped: 89497600 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2818244 data_alloc: 218103808 data_used: 4783016
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169549824 unmapped: 89497600 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b28190c00 session 0x555b281b0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b29a00800 session 0x555b26fd2e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b29bfe800 session 0x555b298908c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b29d2d400 session 0x555b29b55880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b2cfb1800 session 0x555b298c1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f6e38000/0x0/0x4ffc00000, data 0x2b30b47/0x2d14000, compress 0x0/0x0/0x0, omap 0x5d19c, meta 0x6052e64), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169410560 unmapped: 89636864 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b28190c00 session 0x555b26fd3180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 ms_handle_reset con 0x555b2701cc00 session 0x555b29698000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169410560 unmapped: 89636864 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 341 ms_handle_reset con 0x555b29a00800 session 0x555b27d14380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f6e33000/0x0/0x4ffc00000, data 0x2b327d5/0x2d19000, compress 0x0/0x0/0x0, omap 0x5db99, meta 0x6052467), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169418752 unmapped: 89628672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 ms_handle_reset con 0x555b29bfe800 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f6e29000/0x0/0x4ffc00000, data 0x2b35e60/0x2d1f000, compress 0x0/0x0/0x0, omap 0x5e2a7, meta 0x6051d59), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169418752 unmapped: 89628672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2831358 data_alloc: 218103808 data_used: 4783386
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 ms_handle_reset con 0x555b28190c00 session 0x555b28e356c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 ms_handle_reset con 0x555b2701cc00 session 0x555b2a429a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 ms_handle_reset con 0x555b2cfb1800 session 0x555b2a444380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169426944 unmapped: 89620480 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 ms_handle_reset con 0x555b2acd1c00 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 ms_handle_reset con 0x555b2bba7000 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 ms_handle_reset con 0x555b2bba7000 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 ms_handle_reset con 0x555b2701cc00 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 ms_handle_reset con 0x555b28190c00 session 0x555b26746a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 89612288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b27b82800 session 0x555b29699dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b2acd1c00 session 0x555b29b30c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b29a00800 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b27b82800 session 0x555b2948b500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169459712 unmapped: 89587712 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 ms_handle_reset con 0x555b2cfb1800 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.877623558s of 10.041072845s, submitted: 119
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 345 ms_handle_reset con 0x555b29bfb400 session 0x555b2966dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 89169920 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 ms_handle_reset con 0x555b2701cc00 session 0x555b296c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 ms_handle_reset con 0x555b27dab400 session 0x555b2a428000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 ms_handle_reset con 0x555b29bfb400 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 ms_handle_reset con 0x555b29a00800 session 0x555b296996c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 82255872 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2953967 data_alloc: 234881024 data_used: 21387521
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f6df5000/0x0/0x4ffc00000, data 0x2b61207/0x2d53000, compress 0x0/0x0/0x0, omap 0x5f171, meta 0x6050e8f), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b2cfb1800 session 0x555b2a41b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b2cfb1800 session 0x555b2a436a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b27b82800 session 0x555b29772540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176848896 unmapped: 82198528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f6df5000/0x0/0x4ffc00000, data 0x2b61207/0x2d53000, compress 0x0/0x0/0x0, omap 0x5f171, meta 0x6050e8f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b2701cc00 session 0x555b28ea0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b29a00800 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b28190c00 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 82182144 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 ms_handle_reset con 0x555b2bba7000 session 0x555b297736c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 ms_handle_reset con 0x555b29bfb400 session 0x555b28e34540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 ms_handle_reset con 0x555b2701cc00 session 0x555b29891180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 ms_handle_reset con 0x555b27dab400 session 0x555b28e44540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f6df6000/0x0/0x4ffc00000, data 0x2b62e07/0x2d56000, compress 0x0/0x0/0x0, omap 0x5f6c5, meta 0x605093b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 ms_handle_reset con 0x555b2cfb1800 session 0x555b29d99180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176979968 unmapped: 82067456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b29a00800 session 0x555b298c0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b27b82800 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b2701cc00 session 0x555b273b5340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176979968 unmapped: 82067456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b29bfb400 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b27dab400 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 ms_handle_reset con 0x555b2bba7000 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177045504 unmapped: 82001920 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 349 handle_osd_map epochs [349,350], i have 350, src has [1,350]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 ms_handle_reset con 0x555b2701cc00 session 0x555b2a429880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 ms_handle_reset con 0x555b29bf4000 session 0x555b2948b500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962745 data_alloc: 234881024 data_used: 21385359
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 ms_handle_reset con 0x555b29a00800 session 0x555b28e1d500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 ms_handle_reset con 0x555b27b82800 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177111040 unmapped: 81936384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 350 handle_osd_map epochs [351,351], i have 351, src has [1,351]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 351 ms_handle_reset con 0x555b29bfb400 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 351 ms_handle_reset con 0x555b29a00800 session 0x555b28ea0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 82026496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 351 ms_handle_reset con 0x555b29bf4000 session 0x555b29699c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 352 ms_handle_reset con 0x555b27b82800 session 0x555b2a29cc40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 352 ms_handle_reset con 0x555b2701cc00 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 352 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 90677248 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.490391731s of 10.144715309s, submitted: 274
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 353 ms_handle_reset con 0x555b29a00800 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 353 ms_handle_reset con 0x555b27b82800 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f7efa000/0x0/0x4ffc00000, data 0x1a5c464/0x1c50000, compress 0x0/0x0/0x0, omap 0x61893, meta 0x604e76d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 90677248 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 ms_handle_reset con 0x555b29bfb400 session 0x555b28e45500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 ms_handle_reset con 0x555b29bf4000 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 ms_handle_reset con 0x555b2701cc00 session 0x555b297736c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168378368 unmapped: 90669056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f7ef2000/0x0/0x4ffc00000, data 0x1a5fb15/0x1c54000, compress 0x0/0x0/0x0, omap 0x61f7f, meta 0x604e081), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 ms_handle_reset con 0x555b29a00800 session 0x555b29dec700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780099 data_alloc: 218103808 data_used: 4784802
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 354 ms_handle_reset con 0x555b29bfb400 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 355 ms_handle_reset con 0x555b27b82800 session 0x555b281b0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 355 ms_handle_reset con 0x555b2bba7000 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169451520 unmapped: 89595904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 355 ms_handle_reset con 0x555b27b82800 session 0x555b27d19340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 356 ms_handle_reset con 0x555b29a00800 session 0x555b296c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 356 ms_handle_reset con 0x555b29bf5000 session 0x555b27d14380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 356 ms_handle_reset con 0x555b2701cc00 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 90759168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 357 ms_handle_reset con 0x555b29bfb400 session 0x555b29bc9dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 357 ms_handle_reset con 0x555b2701cc00 session 0x555b27d18700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 90718208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f7eee000/0x0/0x4ffc00000, data 0x1a64ec3/0x1c58000, compress 0x0/0x0/0x0, omap 0x62818, meta 0x604d7e8), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 ms_handle_reset con 0x555b27b82800 session 0x555b281b1dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 90718208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 ms_handle_reset con 0x555b29a00800 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 ms_handle_reset con 0x555b29bf5000 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 ms_handle_reset con 0x555b29bfb400 session 0x555b2966dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 ms_handle_reset con 0x555b2701cc00 session 0x555b2a437c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168009728 unmapped: 91037696 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825981 data_alloc: 218103808 data_used: 4786429
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168001536 unmapped: 91045888 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 ms_handle_reset con 0x555b27b82800 session 0x555b2966d880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f7b9c000/0x0/0x4ffc00000, data 0x1db36da/0x1fac000, compress 0x0/0x0/0x0, omap 0x638ad, meta 0x604c753), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 91127808 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 ms_handle_reset con 0x555b29a00800 session 0x555b29772fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 ms_handle_reset con 0x555b29bf5000 session 0x555b2966c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167952384 unmapped: 91095040 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f7ba0000/0x0/0x4ffc00000, data 0x1db36da/0x1fac000, compress 0x0/0x0/0x0, omap 0x639c5, meta 0x604c63b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.371664047s of 10.225804329s, submitted: 213
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b29a04800 session 0x555b29d99dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b2701cc00 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167960576 unmapped: 91086848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167960576 unmapped: 91086848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833482 data_alloc: 218103808 data_used: 4787542
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b27b82800 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167960576 unmapped: 91086848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b29bf5000 session 0x555b298c08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b29a00800 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167960576 unmapped: 91086848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167960576 unmapped: 91086848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 ms_handle_reset con 0x555b2a9bc800 session 0x555b26fd3180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167993344 unmapped: 91054080 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f7b9c000/0x0/0x4ffc00000, data 0x1db524f/0x1fb0000, compress 0x0/0x0/0x0, omap 0x64311, meta 0x604bcef), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 360 handle_osd_map epochs [360,361], i have 361, src has [1,361]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 361 ms_handle_reset con 0x555b2701cc00 session 0x555b28e456c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167993344 unmapped: 91054080 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2838033 data_alloc: 218103808 data_used: 4787874
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 362 ms_handle_reset con 0x555b27b82800 session 0x555b28ea0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 362 ms_handle_reset con 0x555b29a04000 session 0x555b28e35c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 91021312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 91021312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f7b94000/0x0/0x4ffc00000, data 0x1db89bf/0x1fb6000, compress 0x0/0x0/0x0, omap 0x64541, meta 0x604babf), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 91021312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.211619377s of 10.441501617s, submitted: 61
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 91021312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 363 ms_handle_reset con 0x555b29a00800 session 0x555b296c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 363 ms_handle_reset con 0x555b2b3a9400 session 0x555b2966d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 91021312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 364 ms_handle_reset con 0x555b2b3a9400 session 0x555b298c1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2853431 data_alloc: 218103808 data_used: 4788557
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 364 ms_handle_reset con 0x555b2701cc00 session 0x555b28e34000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 90988544 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 365 ms_handle_reset con 0x555b29a00800 session 0x555b2a437340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 90972160 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 365 ms_handle_reset con 0x555b29bf5000 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167813120 unmapped: 91234304 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 365 ms_handle_reset con 0x555b27b75400 session 0x555b2a437500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f7b88000/0x0/0x4ffc00000, data 0x1dbdbf8/0x1fc2000, compress 0x0/0x0/0x0, omap 0x65252, meta 0x604adae), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 366 ms_handle_reset con 0x555b29a04000 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 91078656 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 366 ms_handle_reset con 0x555b29bf7c00 session 0x555b28e1d180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 366 ms_handle_reset con 0x555b2701cc00 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 91070464 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2880383 data_alloc: 218103808 data_used: 7862966
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 367 ms_handle_reset con 0x555b27b75400 session 0x555b29b31180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169033728 unmapped: 90013696 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 367 ms_handle_reset con 0x555b29bf5000 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169074688 unmapped: 89972736 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 368 ms_handle_reset con 0x555b27b75400 session 0x555b296c0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 368 ms_handle_reset con 0x555b2701cc00 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 368 ms_handle_reset con 0x555b29a04000 session 0x555b293bb880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 368 handle_osd_map epochs [369,369], i have 369, src has [1,369]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169099264 unmapped: 89948160 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 369 ms_handle_reset con 0x555b2b3a9400 session 0x555b29d99dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 369 ms_handle_reset con 0x555b29a00800 session 0x555b29b31880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.772825718s of 10.046725273s, submitted: 127
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169107456 unmapped: 89939968 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 369 ms_handle_reset con 0x555b29bf7c00 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x1dc4da5/0x1fcf000, compress 0x0/0x0/0x0, omap 0x667c3, meta 0x604983d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 89931776 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 370 ms_handle_reset con 0x555b27b75400 session 0x555b29890000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 370 ms_handle_reset con 0x555b29a00800 session 0x555b2948b500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 370 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889572 data_alloc: 218103808 data_used: 7863904
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 89931776 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 370 ms_handle_reset con 0x555b29a04000 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177627136 unmapped: 81420288 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 81371136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 371 ms_handle_reset con 0x555b2701cc00 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 82190336 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6f7a000/0x0/0x4ffc00000, data 0x29be507/0x2bc8000, compress 0x0/0x0/0x0, omap 0x66b89, meta 0x6049477), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 371 ms_handle_reset con 0x555b29a00800 session 0x555b27d15500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 372 ms_handle_reset con 0x555b29bf7c00 session 0x555b29891c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 176873472 unmapped: 82173952 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 373 ms_handle_reset con 0x555b2b3a9400 session 0x555b298c08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993596 data_alloc: 218103808 data_used: 9179833
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 373 ms_handle_reset con 0x555b2b3aa000 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 373 ms_handle_reset con 0x555b27b75400 session 0x555b2966d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 82026496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 373 ms_handle_reset con 0x555b29a00800 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 82026496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 374 ms_handle_reset con 0x555b29bf7c00 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 374 ms_handle_reset con 0x555b2b3aa000 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 374 ms_handle_reset con 0x555b2b3a9400 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f6f76000/0x0/0x4ffc00000, data 0x29c3e73/0x2bd4000, compress 0x0/0x0/0x0, omap 0x67a87, meta 0x6048579), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 177160192 unmapped: 81887232 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 375 ms_handle_reset con 0x555b29bf9000 session 0x555b28117500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 375 ms_handle_reset con 0x555b27b75400 session 0x555b29772e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 375 ms_handle_reset con 0x555b2701cc00 session 0x555b29bc8000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 375 ms_handle_reset con 0x555b29a00800 session 0x555b27d0e700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178216960 unmapped: 80830464 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924740791s of 10.423274040s, submitted: 265
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 80543744 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003346 data_alloc: 218103808 data_used: 9180231
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 376 heartbeat osd_stat(store_statfs(0x4f6f4c000/0x0/0x4ffc00000, data 0x29e9bfd/0x2bfe000, compress 0x0/0x0/0x0, omap 0x684cd, meta 0x6047b33), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 376 ms_handle_reset con 0x555b29bf7c00 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 80535552 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 376 ms_handle_reset con 0x555b2701cc00 session 0x555b28e456c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 80535552 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 376 ms_handle_reset con 0x555b2b3aa000 session 0x555b29dece00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178176000 unmapped: 80871424 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 ms_handle_reset con 0x555b27b75400 session 0x555b298c1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 ms_handle_reset con 0x555b29a00800 session 0x555b29b31dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 ms_handle_reset con 0x555b29a06400 session 0x555b26fd3180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 ms_handle_reset con 0x555b29acc400 session 0x555b28e35340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 377 handle_osd_map epochs [377,378], i have 378, src has [1,378]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 378 ms_handle_reset con 0x555b2a9bc000 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 378 ms_handle_reset con 0x555b2701cc00 session 0x555b29b55500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f6f4b000/0x0/0x4ffc00000, data 0x29eb2b3/0x2bff000, compress 0x0/0x0/0x0, omap 0x690b9, meta 0x6046f47), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 378 ms_handle_reset con 0x555b29bf9000 session 0x555b2a4448c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 80674816 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026596 data_alloc: 218103808 data_used: 9180215
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 379 ms_handle_reset con 0x555b27b75400 session 0x555b28e34fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 379 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 379 ms_handle_reset con 0x555b29acc400 session 0x555b26746a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f6f3c000/0x0/0x4ffc00000, data 0x2baa679/0x2c0c000, compress 0x0/0x0/0x0, omap 0x69e8f, meta 0x6046171), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 380 ms_handle_reset con 0x555b29a00800 session 0x555b28e44000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 380 handle_osd_map epochs [381,381], i have 381, src has [1,381]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.436594963s of 11.192692757s, submitted: 153
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 381 ms_handle_reset con 0x555b2a9bc000 session 0x555b28ea0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033319 data_alloc: 218103808 data_used: 9180117
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 381 ms_handle_reset con 0x555b29bf9000 session 0x555b27d15dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 381 ms_handle_reset con 0x555b2b3aa000 session 0x555b29772380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178348032 unmapped: 80699392 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 382 ms_handle_reset con 0x555b2701cc00 session 0x555b28e34540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 382 ms_handle_reset con 0x555b29a00800 session 0x555b2966d880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 383 ms_handle_reset con 0x555b2a9bc000 session 0x555b29772540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 80674816 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 384 ms_handle_reset con 0x555b2fb65000 session 0x555b28e35500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 384 ms_handle_reset con 0x555b29438000 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 384 ms_handle_reset con 0x555b2fb65000 session 0x555b296981c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 384 ms_handle_reset con 0x555b2c336000 session 0x555b26fd3340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 80519168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 385 ms_handle_reset con 0x555b2701cc00 session 0x555b2a437c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f6f2d000/0x0/0x4ffc00000, data 0x2bb16bf/0x2c19000, compress 0x0/0x0/0x0, omap 0x6abfd, meta 0x6045403), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 385 ms_handle_reset con 0x555b29acc400 session 0x555b2948a8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 80519168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046611 data_alloc: 218103808 data_used: 9180133
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f6f2d000/0x0/0x4ffc00000, data 0x2bb3293/0x2c1c000, compress 0x0/0x0/0x0, omap 0x6ad10, meta 0x60452f0), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 80510976 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 386 ms_handle_reset con 0x555b2701cc00 session 0x555b26fd3880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 386 ms_handle_reset con 0x555b29438000 session 0x555b27d14540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 386 ms_handle_reset con 0x555b2c336000 session 0x555b2966dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 80510976 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 80510976 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 387 ms_handle_reset con 0x555b2fb65000 session 0x555b29773500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 387 ms_handle_reset con 0x555b29a00800 session 0x555b281b1dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 387 ms_handle_reset con 0x555b2b3aa000 session 0x555b2a41ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178544640 unmapped: 80502784 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f6f27000/0x0/0x4ffc00000, data 0x2bb82fc/0x2c21000, compress 0x0/0x0/0x0, omap 0x6b894, meta 0x604476c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 80494592 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053217 data_alloc: 234881024 data_used: 9445011
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f6f27000/0x0/0x4ffc00000, data 0x2bb82fc/0x2c21000, compress 0x0/0x0/0x0, omap 0x6b894, meta 0x604476c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.236049652s of 10.707848549s, submitted: 176
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 389 ms_handle_reset con 0x555b2701cc00 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 389 ms_handle_reset con 0x555b29438000 session 0x555b2a436c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 80494592 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178569216 unmapped: 80478208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 80453632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f6f21000/0x0/0x4ffc00000, data 0x2bbba57/0x2c27000, compress 0x0/0x0/0x0, omap 0x6c10e, meta 0x6043ef2), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 80453632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f6f1f000/0x0/0x4ffc00000, data 0x2ecba57/0x2c2d000, compress 0x0/0x0/0x0, omap 0x6c3c0, meta 0x6043c40), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 80453632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089481 data_alloc: 234881024 data_used: 9445011
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 80453632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f6f1f000/0x0/0x4ffc00000, data 0x2ecba57/0x2c2d000, compress 0x0/0x0/0x0, omap 0x6c3c0, meta 0x6043c40), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 ms_handle_reset con 0x555b2c336000 session 0x555b28ea0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 80445440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 390 handle_osd_map epochs [390,391], i have 391, src has [1,391]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 80445440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 391 ms_handle_reset con 0x555b2fb62800 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 77316096 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f6f18000/0x0/0x4ffc00000, data 0x2ecd59c/0x2c32000, compress 0x0/0x0/0x0, omap 0x6c869, meta 0x6043797), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 392 ms_handle_reset con 0x555b2701cc00 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 77266944 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123833 data_alloc: 234881024 data_used: 15946915
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 393 ms_handle_reset con 0x555b29438000 session 0x555b297728c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.760065079s of 10.035781860s, submitted: 99
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 393 ms_handle_reset con 0x555b2b3aa000 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 393 ms_handle_reset con 0x555b2fb65000 session 0x555b2a436000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181829632 unmapped: 77217792 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 394 ms_handle_reset con 0x555b2c336000 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 394 ms_handle_reset con 0x555b2701cc00 session 0x555b281b0fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 77078528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181084160 unmapped: 77963264 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 395 handle_osd_map epochs [395,396], i have 396, src has [1,396]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 ms_handle_reset con 0x555b29438000 session 0x555b29bc9dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 ms_handle_reset con 0x555b2b3aa000 session 0x555b298c1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 77529088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f6f09000/0x0/0x4ffc00000, data 0x2ed5f4d/0x2c3f000, compress 0x0/0x0/0x0, omap 0x6dc8c, meta 0x6042374), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 77529088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134198 data_alloc: 234881024 data_used: 16344511
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 77529088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 77529088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 77529088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f6f09000/0x0/0x4ffc00000, data 0x2ed5f4d/0x2c3f000, compress 0x0/0x0/0x0, omap 0x6dc8c, meta 0x6042374), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181600256 unmapped: 77447168 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x2f1cf4d/0x2c59000, compress 0x0/0x0/0x0, omap 0x6d949, meta 0x60426b7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181616640 unmapped: 77430784 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152134 data_alloc: 234881024 data_used: 16242111
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181616640 unmapped: 77430784 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x2f1cf4d/0x2c59000, compress 0x0/0x0/0x0, omap 0x6d949, meta 0x60426b7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181649408 unmapped: 77398016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.745087624s of 11.886486053s, submitted: 96
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x2f1cf4d/0x2c59000, compress 0x0/0x0/0x0, omap 0x6d949, meta 0x60426b7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 77348864 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 397 ms_handle_reset con 0x555b2fb65000 session 0x555b273b5340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 77348864 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 77348864 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3154528 data_alloc: 234881024 data_used: 16242111
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 398 ms_handle_reset con 0x555b2fb62800 session 0x555b2a41ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 77332480 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 398 ms_handle_reset con 0x555b2701cc00 session 0x555b26fd2000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 398 ms_handle_reset con 0x555b29438000 session 0x555b296996c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2b3aa000 session 0x555b28ea16c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 77283328 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f6ee9000/0x0/0x4ffc00000, data 0x2f221a0/0x2c63000, compress 0x0/0x0/0x0, omap 0x6eae1, meta 0x604151f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2fb65000 session 0x555b28ea0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2b3a8c00 session 0x555b2a428c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2701dc00 session 0x555b2a437c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2701cc00 session 0x555b2a444fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 77242368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 77242368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b29438000 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2b3aa000 session 0x555b29b316c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 77242368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162912 data_alloc: 234881024 data_used: 16242111
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2fb65000 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f6ee9000/0x0/0x4ffc00000, data 0x2f221f2/0x2c63000, compress 0x0/0x0/0x0, omap 0x6f381, meta 0x6040c7f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 ms_handle_reset con 0x555b2701cc00 session 0x555b26747880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 400 ms_handle_reset con 0x555b2701dc00 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 77242368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f6ee4000/0x0/0x4ffc00000, data 0x2f23daa/0x2c66000, compress 0x0/0x0/0x0, omap 0x6f517, meta 0x6040ae9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 400 ms_handle_reset con 0x555b29438000 session 0x555b2a428000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 401 ms_handle_reset con 0x555b2b3aa000 session 0x555b29891c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 77242368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.702698708s of 10.004937172s, submitted: 119
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 77193216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b298f6000 session 0x555b29b31a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b2701cc00 session 0x555b29ded6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b2a9bc000 session 0x555b29772e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 77193216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b2701dc00 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b29438000 session 0x555b29b541c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181862400 unmapped: 77185024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175797 data_alloc: 234881024 data_used: 16250916
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b2b3aa000 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 ms_handle_reset con 0x555b2701cc00 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181862400 unmapped: 77185024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 402 handle_osd_map epochs [402,403], i have 403, src has [1,403]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 403 ms_handle_reset con 0x555b2701dc00 session 0x555b298916c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181870592 unmapped: 77176832 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 403 ms_handle_reset con 0x555b29438000 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f6edb000/0x0/0x4ffc00000, data 0x2f28f9b/0x2c6f000, compress 0x0/0x0/0x0, omap 0x702b5, meta 0x603fd4b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 403 ms_handle_reset con 0x555b2accac00 session 0x555b29698000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b2a9bc000 session 0x555b2948a1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 77299712 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b29d29400 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b2701cc00 session 0x555b29772c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b2701dc00 session 0x555b28e341c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180838400 unmapped: 78209024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180838400 unmapped: 78209024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6ef7000/0x0/0x4ffc00000, data 0x2a1fb49/0x2c51000, compress 0x0/0x0/0x0, omap 0x70814, meta 0x603f7ec), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136689 data_alloc: 234881024 data_used: 11253698
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b27b82800 session 0x555b298908c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b29438000 session 0x555b28e44c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180846592 unmapped: 78200832 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 ms_handle_reset con 0x555b2accac00 session 0x555b2a429a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6efa000/0x0/0x4ffc00000, data 0x2a1fbab/0x2c52000, compress 0x0/0x0/0x0, omap 0x70bbb, meta 0x603f445), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180846592 unmapped: 78200832 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.508086205s of 10.027208328s, submitted: 163
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 ms_handle_reset con 0x555b2701cc00 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180854784 unmapped: 78192640 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 ms_handle_reset con 0x555b2701dc00 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 ms_handle_reset con 0x555b29d29400 session 0x555b2757f500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 ms_handle_reset con 0x555b29a03800 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178094080 unmapped: 80953344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 405 handle_osd_map epochs [405,406], i have 406, src has [1,406]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b27b82800 session 0x555b2a4376c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f7b4d000/0x0/0x4ffc00000, data 0x1aba3ab/0x1cef000, compress 0x0/0x0/0x0, omap 0x714a9, meta 0x603eb57), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b29a03800 session 0x555b296c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178094080 unmapped: 80953344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2980950 data_alloc: 218103808 data_used: 4796452
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b2701dc00 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178094080 unmapped: 80953344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b29d29400 session 0x555b2a41ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b27b82800 session 0x555b296c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 ms_handle_reset con 0x555b2701cc00 session 0x555b298c1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f7e5e000/0x0/0x4ffc00000, data 0x1aba349/0x1cee000, compress 0x0/0x0/0x0, omap 0x715c1, meta 0x603ea3f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 80969728 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 407 ms_handle_reset con 0x555b2701dc00 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f7e59000/0x0/0x4ffc00000, data 0x1abbf65/0x1cf1000, compress 0x0/0x0/0x0, omap 0x717e1, meta 0x603e81f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 80969728 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 408 ms_handle_reset con 0x555b2accac00 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 408 ms_handle_reset con 0x555b2ac2bc00 session 0x555b26b4d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178012160 unmapped: 81035264 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 409 ms_handle_reset con 0x555b29a03800 session 0x555b281b0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 409 ms_handle_reset con 0x555b2ac2bc00 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178020352 unmapped: 81027072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 409 ms_handle_reset con 0x555b2701cc00 session 0x555b27d0f6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990361 data_alloc: 218103808 data_used: 4796967
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 409 ms_handle_reset con 0x555b27b82800 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178053120 unmapped: 80994304 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 410 ms_handle_reset con 0x555b2701dc00 session 0x555b2a445a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f7e51000/0x0/0x4ffc00000, data 0x1abf600/0x1cf8000, compress 0x0/0x0/0x0, omap 0x720ba, meta 0x603df46), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 410 ms_handle_reset con 0x555b2701cc00 session 0x555b28117500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 410 ms_handle_reset con 0x555b27b82800 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 80977920 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b29a03800 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894762039s of 10.242547989s, submitted: 155
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2ac2bc00 session 0x555b2a41b880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2accac00 session 0x555b2c3a4000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180166656 unmapped: 78880768 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b27b82800 session 0x555b293bb880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b29a03800 session 0x555b2a4448c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 78872576 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2ac2bc00 session 0x555b2a41bc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f7e4a000/0x0/0x4ffc00000, data 0x1ac2d31/0x1cfe000, compress 0x0/0x0/0x0, omap 0x72614, meta 0x603d9ec), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2bba6800 session 0x555b26746380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 ms_handle_reset con 0x555b2701cc00 session 0x555b26746380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180191232 unmapped: 78856192 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998251 data_alloc: 218103808 data_used: 4798437
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b27b82800 session 0x555b2a41ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 180191232 unmapped: 78856192 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b29a03800 session 0x555b2a4376c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b2ac2bc00 session 0x555b296996c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b29d2b000 session 0x555b2a428e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 183721984 unmapped: 75325440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b2701cc00 session 0x555b27d0efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b27b82800 session 0x555b29bc9a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 ms_handle_reset con 0x555b2b3abc00 session 0x555b293ba8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f7cec000/0x0/0x4ffc00000, data 0x1c21887/0x1e5d000, compress 0x0/0x0/0x0, omap 0x72de9, meta 0x603d217), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 412 handle_osd_map epochs [413,413], i have 413, src has [1,413]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179544064 unmapped: 79503360 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 413 ms_handle_reset con 0x555b29a03800 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 413 ms_handle_reset con 0x555b29d2b000 session 0x555b281b0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179544064 unmapped: 79503360 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f7093000/0x0/0x4ffc00000, data 0x287a477/0x2ab7000, compress 0x0/0x0/0x0, omap 0x72f7f, meta 0x603d081), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 413 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179544064 unmapped: 79503360 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3093779 data_alloc: 218103808 data_used: 4798737
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179470336 unmapped: 79577088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 414 ms_handle_reset con 0x555b27b82800 session 0x555b26fd2000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f7095000/0x0/0x4ffc00000, data 0x287a477/0x2ab7000, compress 0x0/0x0/0x0, omap 0x7404b, meta 0x603bfb5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179470336 unmapped: 79577088 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.930657387s of 10.153879166s, submitted: 73
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 415 ms_handle_reset con 0x555b29a03800 session 0x555b29dedc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 79560704 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 ms_handle_reset con 0x555b29d2b000 session 0x555b296988c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 ms_handle_reset con 0x555b2b3abc00 session 0x555b29b55880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 79560704 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 ms_handle_reset con 0x555b2701cc00 session 0x555b2a429a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f708a000/0x0/0x4ffc00000, data 0x287f6ba/0x2ac0000, compress 0x0/0x0/0x0, omap 0x74b87, meta 0x603b479), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 79536128 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3101224 data_alloc: 218103808 data_used: 4799350
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 ms_handle_reset con 0x555b27b82800 session 0x555b2a29d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 79536128 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 79536128 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 417 ms_handle_reset con 0x555b29a03800 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 417 ms_handle_reset con 0x555b29d2b000 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179519488 unmapped: 79527936 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 417 ms_handle_reset con 0x555b2ac2bc00 session 0x555b281b1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 417 ms_handle_reset con 0x555b27b82800 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179519488 unmapped: 79527936 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b29a03800 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b2701cc00 session 0x555b27d15180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b29d2b000 session 0x555b2c3a4380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 79511552 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f7084000/0x0/0x4ffc00000, data 0x2882e9a/0x2ac6000, compress 0x0/0x0/0x0, omap 0x75107, meta 0x603aef9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3108589 data_alloc: 218103808 data_used: 4799622
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f7084000/0x0/0x4ffc00000, data 0x2882e9a/0x2ac6000, compress 0x0/0x0/0x0, omap 0x75107, meta 0x603aef9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b29003400 session 0x555b28e1c700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b2701cc00 session 0x555b2a29da40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 181436416 unmapped: 77611008 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186474496 unmapped: 72572928 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 ms_handle_reset con 0x555b27b82800 session 0x555b2757efc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186474496 unmapped: 72572928 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.230613708s of 10.568974495s, submitted: 87
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 420 ms_handle_reset con 0x555b29a03800 session 0x555b297728c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186482688 unmapped: 72564736 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f707a000/0x0/0x4ffc00000, data 0x288655f/0x2ace000, compress 0x0/0x0/0x0, omap 0x7596c, meta 0x603a694), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 420 ms_handle_reset con 0x555b29d2b000 session 0x555b2966c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186515456 unmapped: 72531968 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3203266 data_alloc: 234881024 data_used: 18749359
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 421 ms_handle_reset con 0x555b2702c000 session 0x555b281b0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186564608 unmapped: 72482816 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 421 ms_handle_reset con 0x555b2701cc00 session 0x555b27d18fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 421 ms_handle_reset con 0x555b27b82800 session 0x555b28ea1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186564608 unmapped: 72482816 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186564608 unmapped: 72482816 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 422 ms_handle_reset con 0x555b29a03800 session 0x555b2a444c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186572800 unmapped: 72474624 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f7077000/0x0/0x4ffc00000, data 0x2889bda/0x2ad3000, compress 0x0/0x0/0x0, omap 0x75d26, meta 0x603a2da), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 186605568 unmapped: 72441856 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208848 data_alloc: 234881024 data_used: 18761647
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 423 ms_handle_reset con 0x555b29d2b000 session 0x555b26b4d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 64299008 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 195043328 unmapped: 64004096 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b29bfa000 session 0x555b296c1500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b2bba6c00 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196485120 unmapped: 62562304 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f63fd000/0x0/0x4ffc00000, data 0x3502221/0x374f000, compress 0x0/0x0/0x0, omap 0x76505, meta 0x6039afb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b2701cc00 session 0x555b29b31880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f63fd000/0x0/0x4ffc00000, data 0x3502221/0x374f000, compress 0x0/0x0/0x0, omap 0x76505, meta 0x6039afb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.107955933s of 10.806370735s, submitted: 211
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196517888 unmapped: 62529536 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b27b82800 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196517888 unmapped: 62529536 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301011 data_alloc: 234881024 data_used: 20989969
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b29a03800 session 0x555b29b30a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 ms_handle_reset con 0x555b29bf8800 session 0x555b28e44c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 424 handle_osd_map epochs [424,425], i have 425, src has [1,425]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 ms_handle_reset con 0x555b2701cc00 session 0x555b2a436c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196517888 unmapped: 62529536 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f63f8000/0x0/0x4ffc00000, data 0x3503e11/0x3752000, compress 0x0/0x0/0x0, omap 0x76611, meta 0x60399ef), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 ms_handle_reset con 0x555b29d2b000 session 0x555b28ea0540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196526080 unmapped: 62521344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 ms_handle_reset con 0x555b27b82800 session 0x555b28ea0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196526080 unmapped: 62521344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 ms_handle_reset con 0x555b29a03800 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 62513152 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f63f9000/0x0/0x4ffc00000, data 0x3506d9f/0x3753000, compress 0x0/0x0/0x0, omap 0x76729, meta 0x60398d7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 62513152 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301985 data_alloc: 234881024 data_used: 20990143
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 62513152 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f63f9000/0x0/0x4ffc00000, data 0x3506d9f/0x3753000, compress 0x0/0x0/0x0, omap 0x76729, meta 0x60398d7), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196542464 unmapped: 62504960 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2c33a400 session 0x555b2c3a41c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2cfb1c00 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196542464 unmapped: 62504960 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2701cc00 session 0x555b29772c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305287 data_alloc: 234881024 data_used: 20998335
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305287 data_alloc: 234881024 data_used: 20998335
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305287 data_alloc: 234881024 data_used: 20998335
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b27b82800 session 0x555b296c1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b29a03800 session 0x555b2a29da40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b29d2b000 session 0x555b2966c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.635032654s of 25.969835281s, submitted: 86
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2701cc00 session 0x555b29b30000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306320 data_alloc: 234881024 data_used: 20998335
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f3000/0x0/0x4ffc00000, data 0x350981e/0x3757000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196567040 unmapped: 62480384 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307472 data_alloc: 234881024 data_used: 21467327
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f4000/0x0/0x4ffc00000, data 0x3509841/0x3758000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f4000/0x0/0x4ffc00000, data 0x3509841/0x3758000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307472 data_alloc: 234881024 data_used: 21467327
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 196698112 unmapped: 62349312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.934085846s of 11.939582825s, submitted: 4
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 61997056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 61997056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f4000/0x0/0x4ffc00000, data 0x3509841/0x3758000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 61997056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f4000/0x0/0x4ffc00000, data 0x3509841/0x3758000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322432 data_alloc: 234881024 data_used: 23017663
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 61997056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63f4000/0x0/0x4ffc00000, data 0x3509841/0x3758000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197050368 unmapped: 61997056 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197115904 unmapped: 61931520 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b29a03800 session 0x555b281cdc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2cfb1c00 session 0x555b2c3a4380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x3515841/0x3764000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322756 data_alloc: 234881024 data_used: 23017663
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x3515841/0x3764000, compress 0x0/0x0/0x0, omap 0x76cdd, meta 0x6039323), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197132288 unmapped: 61915136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.400032043s of 12.426513672s, submitted: 27
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197140480 unmapped: 61906944 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323672 data_alloc: 234881024 data_used: 23017663
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197140480 unmapped: 61906944 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x3515841/0x3764000, compress 0x0/0x0/0x0, omap 0x771fa, meta 0x6038e06), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 ms_handle_reset con 0x555b2c33a400 session 0x555b293ba540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x3515841/0x3764000, compress 0x0/0x0/0x0, omap 0x771fa, meta 0x6038e06), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 61718528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 61718528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 ms_handle_reset con 0x555b2bba6c00 session 0x555b281cce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x3515841/0x3764000, compress 0x0/0x0/0x0, omap 0x771fa, meta 0x6038e06), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 61718528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 ms_handle_reset con 0x555b2bba6c00 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 ms_handle_reset con 0x555b29a03800 session 0x555b281b0e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 ms_handle_reset con 0x555b2701cc00 session 0x555b26fd2540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 61669376 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3343713 data_alloc: 234881024 data_used: 23976127
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197435392 unmapped: 61612032 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197435392 unmapped: 61612032 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197435392 unmapped: 61612032 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33a400 session 0x555b29772fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f639d000/0x0/0x4ffc00000, data 0x35c7fec/0x37ad000, compress 0x0/0x0/0x0, omap 0x778a2, meta 0x603875e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197435392 unmapped: 61612032 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197435392 unmapped: 61612032 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.412251472s of 10.472534180s, submitted: 15
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2cfb1c00 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f639d000/0x0/0x4ffc00000, data 0x35c7fec/0x37ad000, compress 0x0/0x0/0x0, omap 0x778a2, meta 0x603875e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3348176 data_alloc: 234881024 data_used: 23980239
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 61603840 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 61603840 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 61603840 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b281cc8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b2a445a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201654272 unmapped: 57393152 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2bba6c00 session 0x555b2a4281c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33a400 session 0x555b28e341c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 61546496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33b800 session 0x555b281cd6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f439f000/0x0/0x4ffc00000, data 0x55c7fec/0x57ad000, compress 0x0/0x0/0x0, omap 0x77e37, meta 0x60381c9), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519196 data_alloc: 234881024 data_used: 23980239
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33b800 session 0x555b29b55880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 61546496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 61546496 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b281cd880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b28ea0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197656576 unmapped: 61390848 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197705728 unmapped: 61341696 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2901c800 session 0x555b2a437340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a02000 session 0x555b28e1d180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b293bb500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2901c800 session 0x555b2a428380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a02000 session 0x555b2a429500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b27d18fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33b800 session 0x555b2a428000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b296c1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2901c800 session 0x555b28e35880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198213632 unmapped: 60833792 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f364f000/0x0/0x4ffc00000, data 0x6540ffb/0x64fd000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624246 data_alloc: 234881024 data_used: 24020687
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198213632 unmapped: 60833792 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 60825600 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 60825600 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a02000 session 0x555b2a41bdc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 60825600 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b29d99a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b27daac00 session 0x555b26fd3c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 60694528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b28e34e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2901c800 session 0x555b28e35dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f364f000/0x0/0x4ffc00000, data 0x6540ffb/0x64fd000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.630311966s of 15.403319359s, submitted: 61
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624378 data_alloc: 234881024 data_used: 24020687
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 60694528 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 198361088 unmapped: 60686336 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b26998c00 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29bf5800 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f364f000/0x0/0x4ffc00000, data 0x6540ffb/0x64fd000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205463552 unmapped: 53583872 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f364f000/0x0/0x4ffc00000, data 0x6540ffb/0x64fd000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 212058112 unmapped: 46989312 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 212303872 unmapped: 46743552 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3790990 data_alloc: 251658240 data_used: 40421071
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 219561984 unmapped: 39485440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 219627520 unmapped: 39419904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 219627520 unmapped: 39419904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f30c0000/0x0/0x4ffc00000, data 0x6acfffb/0x6a8c000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 39387136 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 216743936 unmapped: 42303488 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796238 data_alloc: 251658240 data_used: 42441423
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 216743936 unmapped: 42303488 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 216743936 unmapped: 42303488 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f30c0000/0x0/0x4ffc00000, data 0x6acfffb/0x6a8c000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x6038135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.099523544s of 12.142329216s, submitted: 10
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 222666752 unmapped: 36380672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 35758080 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 223838208 unmapped: 35209216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3895644 data_alloc: 268435456 data_used: 45202127
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 229924864 unmapped: 29122560 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f117b000/0x0/0x4ffc00000, data 0x7874ffb/0x7831000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x71d8135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2bba6c00 session 0x555b2c3a4380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2c33a400 session 0x555b29dedc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 230064128 unmapped: 28983296 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b2a444fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f0cf5000/0x0/0x4ffc00000, data 0x7cfaffb/0x7cb7000, compress 0x0/0x0/0x0, omap 0x77ecb, meta 0x71d8135), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228630528 unmapped: 30416896 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228630528 unmapped: 30416896 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f07d9000/0x0/0x4ffc00000, data 0x8217fec/0x81d3000, compress 0x0/0x0/0x0, omap 0x79787, meta 0x71d6879), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f07d9000/0x0/0x4ffc00000, data 0x8217fec/0x81d3000, compress 0x0/0x0/0x0, omap 0x79787, meta 0x71d6879), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228663296 unmapped: 30384128 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2901c800 session 0x555b2a29d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b29a03800 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3910781 data_alloc: 268435456 data_used: 45441743
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228687872 unmapped: 30359552 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 ms_handle_reset con 0x555b2701cc00 session 0x555b29772a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b2901c800 session 0x555b26fd3880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228753408 unmapped: 30294016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228761600 unmapped: 30285824 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.461488724s of 11.083440781s, submitted: 222
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b27b82800 session 0x555b2948ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228810752 unmapped: 30236672 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b29a03800 session 0x555b29890380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228843520 unmapped: 30203904 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888663 data_alloc: 251658240 data_used: 43999951
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f0d65000/0x0/0x4ffc00000, data 0x79f2bb9/0x7c46000, compress 0x0/0x0/0x0, omap 0x79e34, meta 0x71d61cc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228859904 unmapped: 30187520 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f0d65000/0x0/0x4ffc00000, data 0x79f2bb9/0x7c46000, compress 0x0/0x0/0x0, omap 0x79e34, meta 0x71d61cc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 228892672 unmapped: 30154752 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b26998c00 session 0x555b26b4d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 227975168 unmapped: 31072256 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b2701cc00 session 0x555b2a4361c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b2bba6c00 session 0x555b2948a8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 227975168 unmapped: 31072256 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f0d65000/0x0/0x4ffc00000, data 0x79f3bb9/0x7c47000, compress 0x0/0x0/0x0, omap 0x7a248, meta 0x71d5db8), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b27b82800 session 0x555b2a4361c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b2901c800 session 0x555b281b1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 43171840 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b29a02000 session 0x555b2757ee00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 ms_handle_reset con 0x555b2701cc00 session 0x555b27d14fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369998 data_alloc: 234881024 data_used: 12391596
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f279f000/0x0/0x4ffc00000, data 0x5fb9bb9/0x620d000, compress 0x0/0x0/0x0, omap 0x7a2d4, meta 0x71d5d2c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 55910400 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 55910400 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f4270000/0x0/0x4ffc00000, data 0x44e8bb9/0x473c000, compress 0x0/0x0/0x0, omap 0x7a535, meta 0x71d5acb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203145216 unmapped: 55902208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203145216 unmapped: 55902208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b27b82800 session 0x555b298c1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.437561989s of 10.595733643s, submitted: 101
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b2901c800 session 0x555b2757f500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b2bba6c00 session 0x555b29d98fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b29a03800 session 0x555b2948aa80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373180 data_alloc: 234881024 data_used: 12391580
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f426e000/0x0/0x4ffc00000, data 0x44ea5d5/0x473e000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x71d53b4), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f426e000/0x0/0x4ffc00000, data 0x44ea5d5/0x473e000, compress 0x0/0x0/0x0, omap 0x7ad64, meta 0x71d529c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373180 data_alloc: 234881024 data_used: 12391580
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.635991096s of 10.653241158s, submitted: 11
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b2701cc00 session 0x555b293bbc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b27b82800 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b2901c800 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373036 data_alloc: 234881024 data_used: 12391580
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 ms_handle_reset con 0x555b2c33a400 session 0x555b29dedc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f426e000/0x0/0x4ffc00000, data 0x44ea5d5/0x473e000, compress 0x0/0x0/0x0, omap 0x7ae32, meta 0x71d51ce), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2bba6c00 session 0x555b2a444380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2701cc00 session 0x555b28e34e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203153408 unmapped: 55894016 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b27b82800 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2901c800 session 0x555b29773c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2bba6c00 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2c33a400 session 0x555b29772540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2701cc00 session 0x555b29ded180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b27b82800 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2901c800 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b29adc800 session 0x555b27d18fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b29d29800 session 0x555b28ea1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203145216 unmapped: 55902208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384462 data_alloc: 234881024 data_used: 12395692
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2701cc00 session 0x555b293ba1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203145216 unmapped: 55902208 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b27b82800 session 0x555b29b31dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203169792 unmapped: 55877632 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b29adc800 session 0x555b29b31500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2accb400 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2901c800 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f4264000/0x0/0x4ffc00000, data 0x44ec227/0x4746000, compress 0x0/0x0/0x0, omap 0x7b60b, meta 0x71d49f5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2701cc00 session 0x555b2a445180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b27b82800 session 0x555b28ea0a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b29adc800 session 0x555b29772700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b2accb400 session 0x555b2a29d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 ms_handle_reset con 0x555b29bfe800 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384495 data_alloc: 234881024 data_used: 12399788
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.221728325s of 11.310848236s, submitted: 47
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 ms_handle_reset con 0x555b2701cc00 session 0x555b28e35500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 ms_handle_reset con 0x555b27b82800 session 0x555b293bb340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 ms_handle_reset con 0x555b29adc800 session 0x555b2a41b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f4263000/0x0/0x4ffc00000, data 0x44eddf7/0x4747000, compress 0x0/0x0/0x0, omap 0x7b8b2, meta 0x71d474e), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f4263000/0x0/0x4ffc00000, data 0x44eddf7/0x4747000, compress 0x0/0x0/0x0, omap 0x7b93e, meta 0x71d46c2), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387989 data_alloc: 234881024 data_used: 12399788
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203177984 unmapped: 55869440 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 ms_handle_reset con 0x555b2accb400 session 0x555b298916c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 55861248 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 433 ms_handle_reset con 0x555b29adb800 session 0x555b2a445dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 433 ms_handle_reset con 0x555b2701cc00 session 0x555b29773500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203309056 unmapped: 55738368 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 433 handle_osd_map epochs [433,434], i have 434, src has [1,434]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 ms_handle_reset con 0x555b29adc800 session 0x555b281cc1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 ms_handle_reset con 0x555b27b82800 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 ms_handle_reset con 0x555b2accb400 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 ms_handle_reset con 0x555b27299800 session 0x555b29772a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f3a5e000/0x0/0x4ffc00000, data 0x4cef84d/0x4f4b000, compress 0x0/0x0/0x0, omap 0x7bea1, meta 0x71d415f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203358208 unmapped: 55689216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508820 data_alloc: 234881024 data_used: 12923993
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203358208 unmapped: 55689216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.336119652s of 10.757556915s, submitted: 97
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 ms_handle_reset con 0x555b2701cc00 session 0x555b29b31dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203358208 unmapped: 55689216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f2e5b000/0x0/0x4ffc00000, data 0x58f1483/0x5b4f000, compress 0x0/0x0/0x0, omap 0x7c531, meta 0x71d3acf), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203448320 unmapped: 55599104 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b27b82800 session 0x555b2a29c8c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203456512 unmapped: 55590912 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b29adc800 session 0x555b26b4ca80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b2bba6c00 session 0x555b2590dc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b27b82400 session 0x555b296c1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 203456512 unmapped: 55590912 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b27b82400 session 0x555b293ba1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 ms_handle_reset con 0x555b2701cc00 session 0x555b26b4d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510459 data_alloc: 234881024 data_used: 12936265
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 54673408 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 ms_handle_reset con 0x555b27b82800 session 0x555b296c0700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205438976 unmapped: 53608448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 ms_handle_reset con 0x555b29adc800 session 0x555b28e356c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 ms_handle_reset con 0x555b2bba6c00 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f2e56000/0x0/0x4ffc00000, data 0x58f4bdc/0x5b53000, compress 0x0/0x0/0x0, omap 0x7cb49, meta 0x71d34b7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 ms_handle_reset con 0x555b2701cc00 session 0x555b29bc9dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205438976 unmapped: 53608448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 ms_handle_reset con 0x555b27b82400 session 0x555b29b31500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b27b82800 session 0x555b293bb500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b29adc800 session 0x555b2757f180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205438976 unmapped: 53608448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b2bba6c00 session 0x555b29699a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b2701cc00 session 0x555b296c1880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b27b82400 session 0x555b28e44700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b27b82800 session 0x555b29ded180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b29adc800 session 0x555b281cc540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b29acdc00 session 0x555b28e45a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f2e30000/0x0/0x4ffc00000, data 0x591a78d/0x5b7a000, compress 0x0/0x0/0x0, omap 0x7cd65, meta 0x71d329b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260576 data_alloc: 218103808 data_used: 5065311
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f57f7000/0x0/0x4ffc00000, data 0x2f5378d/0x31b3000, compress 0x0/0x0/0x0, omap 0x7cdf1, meta 0x71d320f), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.922050476s of 13.094200134s, submitted: 78
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b29acdc00 session 0x555b27d14a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3271184 data_alloc: 218103808 data_used: 6168159
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197214208 unmapped: 61833216 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 ms_handle_reset con 0x555b27b82400 session 0x555b29b31500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 438 ms_handle_reset con 0x555b27b82800 session 0x555b2966ddc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f57f1000/0x0/0x4ffc00000, data 0x2f55435/0x31b9000, compress 0x0/0x0/0x0, omap 0x7d2e3, meta 0x71d2d1d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 438 ms_handle_reset con 0x555b29bf4000 session 0x555b29d98fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197222400 unmapped: 61825024 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 438 ms_handle_reset con 0x555b29003800 session 0x555b2a436e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 439 ms_handle_reset con 0x555b29adc800 session 0x555b29dedc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 439 ms_handle_reset con 0x555b2701cc00 session 0x555b2948a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197238784 unmapped: 61808640 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197238784 unmapped: 61808640 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 439 ms_handle_reset con 0x555b29003800 session 0x555b26b4ca80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3283391 data_alloc: 218103808 data_used: 6168273
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197246976 unmapped: 61800448 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197566464 unmapped: 61480960 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 439 ms_handle_reset con 0x555b27b82800 session 0x555b29698c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 440 heartbeat osd_stat(store_statfs(0x4f57ee000/0x0/0x4ffc00000, data 0x2f57095/0x31be000, compress 0x0/0x0/0x0, omap 0x7dbcf, meta 0x71d2431), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 440 ms_handle_reset con 0x555b29acdc00 session 0x555b293ba700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 197550080 unmapped: 61497344 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 441 ms_handle_reset con 0x555b29bf4000 session 0x555b2757ee00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 441 ms_handle_reset con 0x555b27b82400 session 0x555b2a445dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199589888 unmapped: 59457536 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f2f76000/0x0/0x4ffc00000, data 0x462883f/0x4894000, compress 0x0/0x0/0x0, omap 0x7e509, meta 0x8371af7), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199589888 unmapped: 59457536 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.998496056s of 10.295365334s, submitted: 140
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 441 ms_handle_reset con 0x555b2701cc00 session 0x555b2757f340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 442 ms_handle_reset con 0x555b27b82800 session 0x555b273b5340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 442 ms_handle_reset con 0x555b29003800 session 0x555b2757e1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434725 data_alloc: 218103808 data_used: 6222643
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 59432960 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 442 ms_handle_reset con 0x555b29003800 session 0x555b29891a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199507968 unmapped: 59539456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199507968 unmapped: 59539456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199507968 unmapped: 59539456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 442 ms_handle_reset con 0x555b27b82400 session 0x555b281b0c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f2f75000/0x0/0x4ffc00000, data 0x462a3dd/0x4897000, compress 0x0/0x0/0x0, omap 0x7ef06, meta 0x83710fa), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 443 ms_handle_reset con 0x555b29bf4000 session 0x555b29d98a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 199507968 unmapped: 59539456 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 ms_handle_reset con 0x555b27b82800 session 0x555b28e356c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 ms_handle_reset con 0x555b2701cc00 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444999 data_alloc: 218103808 data_used: 6226869
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200556544 unmapped: 58490880 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 ms_handle_reset con 0x555b2accb400 session 0x555b26746700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 ms_handle_reset con 0x555b2901dc00 session 0x555b27d14380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 ms_handle_reset con 0x555b27b82400 session 0x555b298c0380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200548352 unmapped: 58499072 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f2f69000/0x0/0x4ffc00000, data 0x462dbbf/0x489f000, compress 0x0/0x0/0x0, omap 0x7f61b, meta 0x83709e5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 445 ms_handle_reset con 0x555b2701cc00 session 0x555b2966d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 445 ms_handle_reset con 0x555b27b82800 session 0x555b2a41b340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200482816 unmapped: 58564608 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200482816 unmapped: 58564608 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f2f8a000/0x0/0x4ffc00000, data 0x460d378/0x487e000, compress 0x0/0x0/0x0, omap 0x7fe69, meta 0x8370197), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 446 ms_handle_reset con 0x555b2701cc00 session 0x555b296c1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200491008 unmapped: 58556416 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.058508873s of 10.281455994s, submitted: 161
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438025 data_alloc: 218103808 data_used: 6122272
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 58548224 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f2f90000/0x0/0x4ffc00000, data 0x460d306/0x487c000, compress 0x0/0x0/0x0, omap 0x7fc2f, meta 0x83703d1), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 447 ms_handle_reset con 0x555b27b82400 session 0x555b29890fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 58548224 heap: 259047424 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 448 ms_handle_reset con 0x555b2901dc00 session 0x555b2a428000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 448 ms_handle_reset con 0x555b2accb400 session 0x555b281b1340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 448 ms_handle_reset con 0x555b29003800 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 449 ms_handle_reset con 0x555b2701cc00 session 0x555b29891180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 449 ms_handle_reset con 0x555b2901dc00 session 0x555b298c1c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 449 ms_handle_reset con 0x555b2accb400 session 0x555b28ea08c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200515584 unmapped: 62734336 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 450 ms_handle_reset con 0x555b27b82400 session 0x555b273b5a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200515584 unmapped: 62734336 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200515584 unmapped: 62734336 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 451 ms_handle_reset con 0x555b29bf4000 session 0x555b2a437a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 451 ms_handle_reset con 0x555b2701cc00 session 0x555b2c3a5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623974 data_alloc: 218103808 data_used: 6118561
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200548352 unmapped: 62701568 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200548352 unmapped: 62701568 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f0f42000/0x0/0x4ffc00000, data 0x6655c2d/0x68c6000, compress 0x0/0x0/0x0, omap 0x80ff5, meta 0x836f00b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200548352 unmapped: 62701568 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 452 heartbeat osd_stat(store_statfs(0x4f0f42000/0x0/0x4ffc00000, data 0x6655c2d/0x68c6000, compress 0x0/0x0/0x0, omap 0x80ff5, meta 0x836f00b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 62750720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 62750720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626412 data_alloc: 218103808 data_used: 6119146
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 62750720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 62750720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 62750720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.909623146s of 12.778359413s, submitted: 169
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 452 heartbeat osd_stat(store_statfs(0x4f0f41000/0x0/0x4ffc00000, data 0x66576f4/0x68c9000, compress 0x0/0x0/0x0, omap 0x8113b, meta 0x836eec5), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 452 handle_osd_map epochs [453,453], i have 453, src has [1,453]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 ms_handle_reset con 0x555b27b82400 session 0x555b293ba000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200810496 unmapped: 62439424 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 ms_handle_reset con 0x555b2901dc00 session 0x555b31739180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f0f3e000/0x0/0x4ffc00000, data 0x66591cb/0x68cc000, compress 0x0/0x0/0x0, omap 0x8192f, meta 0x836e6d1), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200810496 unmapped: 62439424 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634898 data_alloc: 218103808 data_used: 6119754
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200810496 unmapped: 62439424 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200810496 unmapped: 62439424 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200810496 unmapped: 62439424 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 ms_handle_reset con 0x555b27b74000 session 0x555b2a428c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 200941568 unmapped: 62308352 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f0ef8000/0x0/0x4ffc00000, data 0x66a11cb/0x6914000, compress 0x0/0x0/0x0, omap 0x81c31, meta 0x836e3cf), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 454 ms_handle_reset con 0x555b27593c00 session 0x555b2a444380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201097216 unmapped: 62152704 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655845 data_alloc: 218103808 data_used: 8686922
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 454 ms_handle_reset con 0x555b27593c00 session 0x555b29d98a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62144512 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 ms_handle_reset con 0x555b2701cc00 session 0x555b26b4c540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62136320 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 ms_handle_reset con 0x555b27b74000 session 0x555b29b316c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62144512 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903695107s of 10.007904053s, submitted: 67
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 ms_handle_reset con 0x555b27b82400 session 0x555b2948ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62128128 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 ms_handle_reset con 0x555b2901dc00 session 0x555b26fd3340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 heartbeat osd_stat(store_statfs(0x4f0ef1000/0x0/0x4ffc00000, data 0x66a4967/0x691b000, compress 0x0/0x0/0x0, omap 0x824c3, meta 0x836db3d), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62128128 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657862 data_alloc: 218103808 data_used: 8686922
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 455 handle_osd_map epochs [455,456], i have 456, src has [1,456]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 456 ms_handle_reset con 0x555b2701cc00 session 0x555b281cc700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62128128 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 456 ms_handle_reset con 0x555b27593c00 session 0x555b2966d880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 53420032 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 456 heartbeat osd_stat(store_statfs(0x4f0eec000/0x0/0x4ffc00000, data 0x66a651f/0x691e000, compress 0x0/0x0/0x0, omap 0x825c4, meta 0x836da3c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 456 ms_handle_reset con 0x555b27b74000 session 0x555b273b5880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205873152 unmapped: 57376768 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205496320 unmapped: 57753600 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 457 ms_handle_reset con 0x555b27b82400 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 457 ms_handle_reset con 0x555b2c337000 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205504512 unmapped: 57745408 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 457 ms_handle_reset con 0x555b2701cc00 session 0x555b2a41b180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3806536 data_alloc: 234881024 data_used: 11380143
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 57901056 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 458 ms_handle_reset con 0x555b27593c00 session 0x555b26747180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 57901056 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 459 ms_handle_reset con 0x555b27b74000 session 0x555b2966ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209879040 unmapped: 53370880 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.726412773s of 10.034411430s, submitted: 136
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 459 heartbeat osd_stat(store_statfs(0x4ef789000/0x0/0x4ffc00000, data 0x7e01826/0x807f000, compress 0x0/0x0/0x0, omap 0x82f7b, meta 0x836d085), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b27b82400 session 0x555b28e35500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209879040 unmapped: 53370880 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b26aa8800 session 0x555b2a29d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b26aa8800 session 0x555b2a29cfc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 heartbeat osd_stat(store_statfs(0x4ef78e000/0x0/0x4ffc00000, data 0x7e017c4/0x807e000, compress 0x0/0x0/0x0, omap 0x82f7b, meta 0x836d085), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 53362688 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b2701cc00 session 0x555b281b1180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3846233 data_alloc: 234881024 data_used: 15574447
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 53362688 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b2accb400 session 0x555b2a41ae00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210108416 unmapped: 53141504 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b29adc800 session 0x555b29ded880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 ms_handle_reset con 0x555b27b74000 session 0x555b31738000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 461 ms_handle_reset con 0x555b27593c00 session 0x555b29deda40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210116608 unmapped: 53133312 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 461 ms_handle_reset con 0x555b26aa8800 session 0x555b28ea0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 ms_handle_reset con 0x555b2701cc00 session 0x555b28e34e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210132992 unmapped: 53116928 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 ms_handle_reset con 0x555b29adc800 session 0x555b281b0000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 ms_handle_reset con 0x555b2accb400 session 0x555b2a29c700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 208846848 unmapped: 54403072 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 heartbeat osd_stat(store_statfs(0x4ef7ab000/0x0/0x4ffc00000, data 0x7de2aea/0x8061000, compress 0x0/0x0/0x0, omap 0x84276, meta 0x836bd8a), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843040 data_alloc: 234881024 data_used: 15470584
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 208846848 unmapped: 54403072 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 ms_handle_reset con 0x555b26aa8800 session 0x555b296c1a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 208846848 unmapped: 54403072 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209838080 unmapped: 53411840 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.779343605s of 10.289878845s, submitted: 108
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 463 handle_osd_map epochs [463,464], i have 464, src has [1,464]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 464 ms_handle_reset con 0x555b2701cc00 session 0x555b29891c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 53420032 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 464 ms_handle_reset con 0x555b27593c00 session 0x555b2a41b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 53420032 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 464 heartbeat osd_stat(store_statfs(0x4ef29a000/0x0/0x4ffc00000, data 0x82ed13d/0x856e000, compress 0x0/0x0/0x0, omap 0x84896, meta 0x836b76a), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 464 ms_handle_reset con 0x555b29adc800 session 0x555b273b5dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880556 data_alloc: 234881024 data_used: 16519773
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 ms_handle_reset con 0x555b27b82400 session 0x555b27d14000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 ms_handle_reset con 0x555b27b82400 session 0x555b281b01c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 heartbeat osd_stat(store_statfs(0x4ef299000/0x0/0x4ffc00000, data 0x82eed2d/0x8571000, compress 0x0/0x0/0x0, omap 0x84f85, meta 0x836b07b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 heartbeat osd_stat(store_statfs(0x4ef299000/0x0/0x4ffc00000, data 0x82eed2d/0x8571000, compress 0x0/0x0/0x0, omap 0x84f85, meta 0x836b07b), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 ms_handle_reset con 0x555b26aa8800 session 0x555b27d0f6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3882130 data_alloc: 234881024 data_used: 16520358
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 ms_handle_reset con 0x555b2701cc00 session 0x555b2a29d340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 ms_handle_reset con 0x555b27593c00 session 0x555b28e34e00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc800 session 0x555b29698000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701d800 session 0x555b29ded500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc400 session 0x555b2966cc40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc800 session 0x555b26fd3340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4ef295000/0x0/0x4ffc00000, data 0x82f07bb/0x8575000, compress 0x0/0x0/0x0, omap 0x85115, meta 0x836aeeb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4ef2b9000/0x0/0x4ffc00000, data 0x82cc7bb/0x8551000, compress 0x0/0x0/0x0, omap 0x84fc0, meta 0x836b040), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884978 data_alloc: 234881024 data_used: 17694374
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4ef2b9000/0x0/0x4ffc00000, data 0x82cc7bb/0x8551000, compress 0x0/0x0/0x0, omap 0x84fc0, meta 0x836b040), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b27593c00 session 0x555b29698c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.738009453s of 14.791184425s, submitted: 38
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209772544 unmapped: 53477376 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b27b82400 session 0x555b28e1ce00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701d800 session 0x555b27d18fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b27593c00 session 0x555b293bba40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804014 data_alloc: 234881024 data_used: 15579384
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209797120 unmapped: 53452800 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808782 data_alloc: 234881024 data_used: 15972600
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3809038 data_alloc: 234881024 data_used: 15980792
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 53387264 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.102342606s of 13.131219864s, submitted: 11
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3809614 data_alloc: 234881024 data_used: 15977720
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210092032 unmapped: 53157888 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b26aa8800 session 0x555b29bc9dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701cc00 session 0x555b27d0e540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc400 session 0x555b29b308c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210100224 unmapped: 53149696 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4effc3000/0x0/0x4ffc00000, data 0x75c57ab/0x7849000, compress 0x0/0x0/0x0, omap 0x85164, meta 0x836ae9c), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210100224 unmapped: 53149696 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3814141 data_alloc: 234881024 data_used: 17534200
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210100224 unmapped: 53149696 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210100224 unmapped: 53149696 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210100224 unmapped: 53149696 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b26aa8800 session 0x555b28116fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.324833870s of 12.352344513s, submitted: 12
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701cc00 session 0x555b293ba1c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2f5a000/0x0/0x4ffc00000, data 0x462f79c/0x48b2000, compress 0x0/0x0/0x0, omap 0x855fc, meta 0x836aa04), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2f5a000/0x0/0x4ffc00000, data 0x462f79c/0x48b2000, compress 0x0/0x0/0x0, omap 0x855fc, meta 0x836aa04), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3537117 data_alloc: 234881024 data_used: 11887784
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2f5a000/0x0/0x4ffc00000, data 0x462f79c/0x48b2000, compress 0x0/0x0/0x0, omap 0x855fc, meta 0x836aa04), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2f5a000/0x0/0x4ffc00000, data 0x462f79c/0x48b2000, compress 0x0/0x0/0x0, omap 0x855fc, meta 0x836aa04), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3537117 data_alloc: 234881024 data_used: 11887784
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2f5a000/0x0/0x4ffc00000, data 0x462f79c/0x48b2000, compress 0x0/0x0/0x0, omap 0x855fc, meta 0x836aa04), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701d800 session 0x555b2a445880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 55689216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b27593c00 session 0x555b2a444fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc400 session 0x555b2a29cfc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b26aa8800 session 0x555b2a41ae00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.864039421s of 11.891689301s, submitted: 17
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701cc00 session 0x555b29b30700
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701d800 session 0x555b26fd3180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b27593c00 session 0x555b2a428c40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b29adc400 session 0x555b2948ac40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b26aa8800 session 0x555b26747180
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578244 data_alloc: 234881024 data_used: 11887784
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2a57000/0x0/0x4ffc00000, data 0x4b3279c/0x4db5000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578376 data_alloc: 234881024 data_used: 11887784
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2a57000/0x0/0x4ffc00000, data 0x4b3279c/0x4db5000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2a57000/0x0/0x4ffc00000, data 0x4b3279c/0x4db5000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 55443456 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206430208 unmapped: 56819712 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206430208 unmapped: 56819712 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2a57000/0x0/0x4ffc00000, data 0x4b3279c/0x4db5000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206430208 unmapped: 56819712 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600264 data_alloc: 234881024 data_used: 15589544
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2a57000/0x0/0x4ffc00000, data 0x4b3279c/0x4db5000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600264 data_alloc: 234881024 data_used: 15589544
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 206512128 unmapped: 56737792 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.343406677s of 16.486043930s, submitted: 29
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209207296 unmapped: 54042624 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f248a000/0x0/0x4ffc00000, data 0x50f079c/0x5373000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210419712 unmapped: 52830208 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210419712 unmapped: 52830208 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2409000/0x0/0x4ffc00000, data 0x518079c/0x5403000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210419712 unmapped: 52830208 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641218 data_alloc: 234881024 data_used: 15709352
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 52748288 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 52748288 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 52748288 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 52609024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f2409000/0x0/0x4ffc00000, data 0x518079c/0x5403000, compress 0x0/0x0/0x0, omap 0x85534, meta 0x836aacc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 52609024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 ms_handle_reset con 0x555b2701d800 session 0x555b273b5a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643386 data_alloc: 234881024 data_used: 15709352
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f23e6000/0x0/0x4ffc00000, data 0x51a27ac/0x5426000, compress 0x0/0x0/0x0, omap 0x855c0, meta 0x836aa40), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f23e6000/0x0/0x4ffc00000, data 0x51a27ac/0x5426000, compress 0x0/0x0/0x0, omap 0x855c0, meta 0x836aa40), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.450386047s of 13.650005341s, submitted: 102
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643858 data_alloc: 234881024 data_used: 15717544
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 467 ms_handle_reset con 0x555b29adc800 session 0x555b2f617c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f23dc000/0x0/0x4ffc00000, data 0x51a9348/0x542e000, compress 0x0/0x0/0x0, omap 0x85c35, meta 0x836a3cb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 468 ms_handle_reset con 0x555b2fb63000 session 0x555b29b31500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f23dc000/0x0/0x4ffc00000, data 0x51a9348/0x542e000, compress 0x0/0x0/0x0, omap 0x85c35, meta 0x836a3cb), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.69 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 32.67 MB, 0.05 MB/s#012Interval WAL: 11K writes, 4939 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650774 data_alloc: 234881024 data_used: 15717560
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 52600832 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2becb000 session 0x555b27d18a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210657280 unmapped: 52592640 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b27593c00 session 0x555b28e44fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210657280 unmapped: 52592640 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2701d800 session 0x555b29dedc00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b26aa8800 session 0x555b2a428380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210657280 unmapped: 52592640 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23ca000/0x0/0x4ffc00000, data 0x51baa80/0x5442000, compress 0x0/0x0/0x0, omap 0x8620d, meta 0x8369df3), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530506134s of 10.577964783s, submitted: 13
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654418 data_alloc: 234881024 data_used: 15718145
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 52584448 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 52584448 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 52584448 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x51bfa80/0x5447000, compress 0x0/0x0/0x0, omap 0x8620d, meta 0x8369df3), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 52584448 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x51bfa80/0x5447000, compress 0x0/0x0/0x0, omap 0x8620d, meta 0x8369df3), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 52584448 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b29adc800 session 0x555b2757f340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654153 data_alloc: 234881024 data_used: 15718145
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x51bfa80/0x5447000, compress 0x0/0x0/0x0, omap 0x8620d, meta 0x8369df3), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 52568064 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2fb63000 session 0x555b2966d880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2becb000 session 0x555b27d0f6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210690048 unmapped: 52559872 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210690048 unmapped: 52559872 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23be000/0x0/0x4ffc00000, data 0x51c6a80/0x544e000, compress 0x0/0x0/0x0, omap 0x86299, meta 0x8369d67), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 52543488 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 52543488 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654761 data_alloc: 234881024 data_used: 15718145
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 52543488 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.801294327s of 10.822642326s, submitted: 11
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b26aa8800 session 0x555b27d14000
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23b2000/0x0/0x4ffc00000, data 0x51d2a80/0x545a000, compress 0x0/0x0/0x0, omap 0x86299, meta 0x8369d67), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b27593c00 session 0x555b2a41a380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2701d800 session 0x555b297736c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23ac000/0x0/0x4ffc00000, data 0x51d7ae2/0x5460000, compress 0x0/0x0/0x0, omap 0x8666d, meta 0x8369993), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660465 data_alloc: 234881024 data_used: 15718145
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23ac000/0x0/0x4ffc00000, data 0x51d7ae2/0x5460000, compress 0x0/0x0/0x0, omap 0x8666d, meta 0x8369993), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 52535296 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23ac000/0x0/0x4ffc00000, data 0x51d7ae2/0x5460000, compress 0x0/0x0/0x0, omap 0x8666d, meta 0x8369993), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 211230720 unmapped: 52019200 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b29adc800 session 0x555b29891a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 211238912 unmapped: 52011008 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667567 data_alloc: 234881024 data_used: 15722241
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 52576256 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b29adc800 session 0x555b29773880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b26aa8800 session 0x555b2a445c00
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 52543488 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2701d800 session 0x555b281cc380
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.183748245s of 11.353111267s, submitted: 74
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b27593c00 session 0x555b2757ec40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 52543488 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2becb000 session 0x555b27d15dc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f23a7000/0x0/0x4ffc00000, data 0x51d0a80/0x5458000, compress 0x0/0x0/0x0, omap 0x86b89, meta 0x8369477), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 52527104 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b26aa8800 session 0x555b28e35a40
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 ms_handle_reset con 0x555b2701d800 session 0x555b2a41b880
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 470 ms_handle_reset con 0x555b27593c00 session 0x555b27d15500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662529 data_alloc: 234881024 data_used: 15722143
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 470 ms_handle_reset con 0x555b29adc800 session 0x555b2948b6c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 471 ms_handle_reset con 0x555b27593400 session 0x555b29772fc0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 471 heartbeat osd_stat(store_statfs(0x4f23ac000/0x0/0x4ffc00000, data 0x51d4260/0x545e000, compress 0x0/0x0/0x0, omap 0x86f44, meta 0x83690bc), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 471 ms_handle_reset con 0x555b27593400 session 0x555b28116a80
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 471 ms_handle_reset con 0x555b26aa8800 session 0x555b29891500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 52518912 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 ms_handle_reset con 0x555b2701d800 session 0x555b2a445500
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 ms_handle_reset con 0x555b27593c00 session 0x555b2a437340
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667329 data_alloc: 234881024 data_used: 15722127
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 210739200 unmapped: 52510720 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 ms_handle_reset con 0x555b2701cc00 session 0x555b2966c540
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 ms_handle_reset con 0x555b26aa8800 session 0x555b281161c0
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f2f46000/0x0/0x4ffc00000, data 0x4639e40/0x48c4000, compress 0x0/0x0/0x0, omap 0x875ed, meta 0x8368a13), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f2f46000/0x0/0x4ffc00000, data 0x4639e40/0x48c4000, compress 0x0/0x0/0x0, omap 0x876ee, meta 0x8368912), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571589 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.801860809s of 13.966397285s, submitted: 105
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f43000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574363 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f43000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f43000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f43000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574363 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f43000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 54075392 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.680708885s of 11.687252045s, submitted: 13
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209182720 unmapped: 54067200 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 54059008 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209223680 unmapped: 54026240 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'config diff' '{prefix=config diff}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'config show' '{prefix=config show}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209027072 unmapped: 54222848 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209158144 unmapped: 54091776 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209420288 unmapped: 53829632 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'log dump' '{prefix=log dump}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209420288 unmapped: 53829632 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf dump' '{prefix=perf dump}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf schema' '{prefix=perf schema}'
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209608704 unmapped: 53641216 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f2f45000/0x0/0x4ffc00000, data 0x463b8bf/0x48c7000, compress 0x0/0x0/0x0, omap 0x87879, meta 0x8368787), peers [0,2] op hist [])
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573643 data_alloc: 234881024 data_used: 11892367
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
Feb  2 13:12:59 np0005605476 ceph-osd[86737]: prioritycache tune_memory target: 4294967296 mapped: 209616896 unmapped: 53633024 heap: 263249920 old mem: 2845415832 new mem: 2845415832
